Test Report: KVM_Linux_crio 22075

c94fefe0767efbb16e4437178ac98ccfb9cdab86:2025-12-09:42695

Failed tests (6/431)

TestAddons/parallel/Ingress (158.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-192260 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-192260 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-192260 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [07d1dbac-13dc-41e6-9fdd-5ba0ff90bb24] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [07d1dbac-13dc-41e6-9fdd-5ba0ff90bb24] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004026877s
I1208 23:07:08.114614  748930 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-192260 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.584745663s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
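
Exit status 28 here is curl's own exit code for "operation timed out", propagated through the SSH session: the request reached the VM, but nginx never answered within curl's limit. A minimal Go sketch of such a probe (names, flags, and timeouts are illustrative, not the suite's actual helper):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// probeIngress shells into the VM the same way the test does and surfaces
// the exit status; a curl timeout (curl exit code 28) shows up in err.
func probeIngress(profile string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", profile,
		"ssh", "curl -s -m 60 http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("ingress probe failed: %w (output: %s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(probeIngress("addons-192260"))
}
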
addons_test.go:288: (dbg) Run:  kubectl --context addons-192260 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.248
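
The ingress-dns step resolves the test name directly against the VM's address. A hedged Go equivalent of that nslookup call, pinning the resolver to the cluster IP (a sketch, not the test's code):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Mirror `nslookup hello-john.test 192.168.39.248`: always dial the
	// ingress-dns server on the minikube VM instead of the system resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.39.248:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "hello-john.test")
	fmt.Println(ips, err)
}
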
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-192260 -n addons-192260
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 logs -n 25: (1.146989457s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-595699                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-595699 │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │ 08 Dec 25 23:04 UTC │
	│ start   │ --download-only -p binary-mirror-322867 --alsologtostderr --binary-mirror http://127.0.0.1:43611 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-322867 │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │                     │
	│ delete  │ -p binary-mirror-322867                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-322867 │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │ 08 Dec 25 23:04 UTC │
	│ addons  │ disable dashboard -p addons-192260                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │                     │
	│ addons  │ enable dashboard -p addons-192260                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │                     │
	│ start   │ -p addons-192260 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:04 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ enable headlamp -p addons-192260 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ ssh     │ addons-192260 ssh cat /opt/local-path-provisioner/pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:07 UTC │
	│ addons  │ addons-192260 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ ip      │ addons-192260 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:06 UTC │
	│ addons  │ addons-192260 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:06 UTC │ 08 Dec 25 23:07 UTC │
	│ ssh     │ addons-192260 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │                     │
	│ addons  │ addons-192260 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │ 08 Dec 25 23:07 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-192260                                                                                                                                                                                                                                                                                                                                                                                         │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │ 08 Dec 25 23:07 UTC │
	│ addons  │ addons-192260 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │ 08 Dec 25 23:07 UTC │
	│ addons  │ addons-192260 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │ 08 Dec 25 23:07 UTC │
	│ addons  │ addons-192260 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:07 UTC │ 08 Dec 25 23:07 UTC │
	│ ip      │ addons-192260 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-192260        │ jenkins │ v1.37.0 │ 08 Dec 25 23:09 UTC │ 08 Dec 25 23:09 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 23:04:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 23:04:01.847982  749871 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:04:01.848335  749871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:04:01.848345  749871 out.go:374] Setting ErrFile to fd 2...
	I1208 23:04:01.848349  749871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:04:01.848588  749871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:04:01.849164  749871 out.go:368] Setting JSON to false
	I1208 23:04:01.850118  749871 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6382,"bootTime":1765228660,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:04:01.850184  749871 start.go:143] virtualization: kvm guest
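
The hostinfo blob above matches the JSON form of gopsutil's host.InfoStat. A minimal sketch that prints the same structure, assuming the github.com/shirou/gopsutil/v3 library:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	info, err := host.Info() // hostname, uptime, kernel, virtualization, ...
	if err != nil {
		log.Fatal(err)
	}
	b, _ := json.Marshal(info)
	fmt.Println(string(b)) // {"hostname":...,"virtualizationSystem":"kvm",...}
}
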
	I1208 23:04:01.851996  749871 out.go:179] * [addons-192260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:04:01.853230  749871 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:04:01.853273  749871 notify.go:221] Checking for updates...
	I1208 23:04:01.855325  749871 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:04:01.856654  749871 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:04:01.857817  749871 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:04:01.858872  749871 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:04:01.860022  749871 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:04:01.861387  749871 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:04:01.894406  749871 out.go:179] * Using the kvm2 driver based on user configuration
	I1208 23:04:01.895449  749871 start.go:309] selected driver: kvm2
	I1208 23:04:01.895468  749871 start.go:927] validating driver "kvm2" against <nil>
	I1208 23:04:01.895484  749871 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:04:01.896569  749871 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 23:04:01.896957  749871 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 23:04:01.896995  749871 cni.go:84] Creating CNI manager for ""
	I1208 23:04:01.897105  749871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:04:01.897119  749871 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 23:04:01.897171  749871 start.go:353] cluster config:
	{Name:addons-192260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-192260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:04:01.897298  749871 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 23:04:01.898608  749871 out.go:179] * Starting "addons-192260" primary control-plane node in "addons-192260" cluster
	I1208 23:04:01.899636  749871 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:04:01.899665  749871 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 23:04:01.899674  749871 cache.go:65] Caching tarball of preloaded images
	I1208 23:04:01.899768  749871 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 23:04:01.899781  749871 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 23:04:01.900172  749871 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/config.json ...
	I1208 23:04:01.900202  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/config.json: {Name:mk0a31764cfde7d5eb993d4c32bb79991d64923e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:01.900378  749871 start.go:360] acquireMachinesLock for addons-192260: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 23:04:01.900462  749871 start.go:364] duration metric: took 64.998µs to acquireMachinesLock for "addons-192260"
	I1208 23:04:01.900484  749871 start.go:93] Provisioning new machine with config: &{Name:addons-192260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-192260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 23:04:01.900546  749871 start.go:125] createHost starting for "" (driver="kvm2")
	I1208 23:04:01.901831  749871 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1208 23:04:01.902003  749871 start.go:159] libmachine.API.Create for "addons-192260" (driver="kvm2")
	I1208 23:04:01.902041  749871 client.go:173] LocalClient.Create starting
	I1208 23:04:01.902139  749871 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem
	I1208 23:04:01.985298  749871 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem
	I1208 23:04:02.160780  749871 main.go:143] libmachine: creating domain...
	I1208 23:04:02.160805  749871 main.go:143] libmachine: creating network...
	I1208 23:04:02.162234  749871 main.go:143] libmachine: found existing default network
	I1208 23:04:02.162432  749871 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1208 23:04:02.162992  749871 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de4780}
	I1208 23:04:02.163113  749871 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-192260</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1208 23:04:02.169132  749871 main.go:143] libmachine: creating private network mk-addons-192260 192.168.39.0/24...
	I1208 23:04:02.246536  749871 main.go:143] libmachine: private network mk-addons-192260 192.168.39.0/24 created
	I1208 23:04:02.246818  749871 main.go:143] libmachine: <network>
	  <name>mk-addons-192260</name>
	  <uuid>2247a2a9-4733-457e-a13c-7ebdf6392f24</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:6d:e2:c2'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
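
The kvm2 driver hands XML like this to libvirt to define and activate the network. A minimal sketch of that sequence, assuming the libvirt.org/go/libvirt bindings the driver wraps (network name is illustrative):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-example</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(netXML) // persist the network definition
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil { // activate it; the virbr bridge comes up
		log.Fatal(err)
	}
}
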
	
	I1208 23:04:02.246853  749871 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260 ...
	I1208 23:04:02.246877  749871 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1208 23:04:02.246888  749871 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:04:02.246961  749871 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22075-744871/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1208 23:04:02.580125  749871 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa...
	I1208 23:04:02.708875  749871 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/addons-192260.rawdisk...
	I1208 23:04:02.708926  749871 main.go:143] libmachine: Writing magic tar header
	I1208 23:04:02.708972  749871 main.go:143] libmachine: Writing SSH key tar header
	I1208 23:04:02.709057  749871 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260 ...
	I1208 23:04:02.709123  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260
	I1208 23:04:02.709169  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260 (perms=drwx------)
	I1208 23:04:02.709190  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines
	I1208 23:04:02.709200  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines (perms=drwxr-xr-x)
	I1208 23:04:02.709214  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:04:02.709225  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube (perms=drwxr-xr-x)
	I1208 23:04:02.709243  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871
	I1208 23:04:02.709254  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871 (perms=drwxrwxr-x)
	I1208 23:04:02.709264  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1208 23:04:02.709274  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1208 23:04:02.709283  749871 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1208 23:04:02.709293  749871 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1208 23:04:02.709301  749871 main.go:143] libmachine: checking permissions on dir: /home
	I1208 23:04:02.709320  749871 main.go:143] libmachine: skipping /home - not owner
	I1208 23:04:02.709328  749871 main.go:143] libmachine: defining domain...
	I1208 23:04:02.710568  749871 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-192260</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/addons-192260.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-192260'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
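
The domain XML is submitted the same way. A short continuation of the network sketch above, under the same assumed libvirt.org/go/libvirt bindings:

// startDomain persists the <domain> definition, then powers it on.
func startDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // virDomainCreate: boots the defined domain
}
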
	
	I1208 23:04:02.717618  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:35:67:b6 in network default
	I1208 23:04:02.718219  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:02.718236  749871 main.go:143] libmachine: starting domain...
	I1208 23:04:02.718240  749871 main.go:143] libmachine: ensuring networks are active...
	I1208 23:04:02.719021  749871 main.go:143] libmachine: Ensuring network default is active
	I1208 23:04:02.719489  749871 main.go:143] libmachine: Ensuring network mk-addons-192260 is active
	I1208 23:04:02.720110  749871 main.go:143] libmachine: getting domain XML...
	I1208 23:04:02.721167  749871 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-192260</name>
	  <uuid>4948afaf-de9e-4c6c-8df5-d1bc42d49810</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/addons-192260.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c0:a1:f5'/>
	      <source network='mk-addons-192260'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:35:67:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1208 23:04:04.052424  749871 main.go:143] libmachine: waiting for domain to start...
	I1208 23:04:04.053787  749871 main.go:143] libmachine: domain is now running
	I1208 23:04:04.053806  749871 main.go:143] libmachine: waiting for IP...
	I1208 23:04:04.054621  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:04.055074  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:04.055085  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:04.055393  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:04.055444  749871 retry.go:31] will retry after 204.830888ms: waiting for domain to come up
	I1208 23:04:04.261891  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:04.262426  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:04.262447  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:04.262729  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:04.262776  749871 retry.go:31] will retry after 390.230331ms: waiting for domain to come up
	I1208 23:04:04.654497  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:04.655134  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:04.655157  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:04.655559  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:04.655605  749871 retry.go:31] will retry after 302.231765ms: waiting for domain to come up
	I1208 23:04:04.959187  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:04.959705  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:04.959722  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:04.960026  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:04.960067  749871 retry.go:31] will retry after 587.136437ms: waiting for domain to come up
	I1208 23:04:05.548811  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:05.549305  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:05.549323  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:05.549618  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:05.549663  749871 retry.go:31] will retry after 458.621588ms: waiting for domain to come up
	I1208 23:04:06.010346  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:06.010942  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:06.010956  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:06.011245  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:06.011282  749871 retry.go:31] will retry after 933.702149ms: waiting for domain to come up
	I1208 23:04:06.946549  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:06.947083  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:06.947095  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:06.947406  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:06.947445  749871 retry.go:31] will retry after 1.010878856s: waiting for domain to come up
	I1208 23:04:07.959932  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:07.960564  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:07.960580  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:07.960878  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:07.960921  749871 retry.go:31] will retry after 1.401755429s: waiting for domain to come up
	I1208 23:04:09.364456  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:09.364996  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:09.365010  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:09.365286  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:09.365324  749871 retry.go:31] will retry after 1.502750843s: waiting for domain to come up
	I1208 23:04:10.870204  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:10.870819  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:10.870835  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:10.871172  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:10.871231  749871 retry.go:31] will retry after 1.548389443s: waiting for domain to come up
	I1208 23:04:12.421037  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:12.421751  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:12.421769  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:12.422074  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:12.422114  749871 retry.go:31] will retry after 2.787917731s: waiting for domain to come up
	I1208 23:04:15.213344  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:15.214062  749871 main.go:143] libmachine: no network interface addresses found for domain addons-192260 (source=lease)
	I1208 23:04:15.214089  749871 main.go:143] libmachine: trying to list again with source=arp
	I1208 23:04:15.214442  749871 main.go:143] libmachine: unable to find current IP address of domain addons-192260 in network mk-addons-192260 (interfaces detected: [])
	I1208 23:04:15.214501  749871 retry.go:31] will retry after 2.268048212s: waiting for domain to come up
	I1208 23:04:17.484647  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.485238  749871 main.go:143] libmachine: domain addons-192260 has current primary IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.485251  749871 main.go:143] libmachine: found domain IP: 192.168.39.248
	I1208 23:04:17.485258  749871 main.go:143] libmachine: reserving static IP address...
	I1208 23:04:17.485657  749871 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-192260", mac: "52:54:00:c0:a1:f5", ip: "192.168.39.248"} in network mk-addons-192260
	I1208 23:04:17.678038  749871 main.go:143] libmachine: reserved static IP address 192.168.39.248 for domain addons-192260
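
The retry.go lines above poll for a DHCP lease with a randomized, growing delay. A minimal sketch of that wait-for-IP loop; lookupLease is a hypothetical stand-in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls until the domain reports an IP or the deadline passes,
// sleeping a randomized, roughly doubling interval between attempts.
func waitForIP(lookupLease func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupLease(); ok {
			return ip, nil
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 3*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for a DHCP lease")
}

func main() {
	ip, err := waitForIP(func() (string, bool) { return "", false }, 2*time.Second)
	fmt.Println(ip, err)
}
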
	I1208 23:04:17.678070  749871 main.go:143] libmachine: waiting for SSH...
	I1208 23:04:17.678079  749871 main.go:143] libmachine: Getting to WaitForSSH function...
	I1208 23:04:17.681074  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.681543  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:17.681591  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.681812  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:17.682129  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:17.682146  749871 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1208 23:04:17.799040  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
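
"Waiting for SSH" amounts to running exit 0 over SSH until it returns status 0. A hedged sketch using golang.org/x/crypto/ssh (the key path is hypothetical):

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/addons-192260/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, key not pinned
	}
	client, err := ssh.Dial("tcp", "192.168.39.248:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil { // zero exit == SSH is ready
		log.Fatal(err)
	}
}
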
	I1208 23:04:17.799498  749871 main.go:143] libmachine: domain creation complete
	I1208 23:04:17.801154  749871 machine.go:94] provisionDockerMachine start ...
	I1208 23:04:17.803677  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.804121  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:17.804149  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.804388  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:17.804633  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:17.804647  749871 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 23:04:17.919920  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1208 23:04:17.919949  749871 buildroot.go:166] provisioning hostname "addons-192260"
	I1208 23:04:17.922837  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.923262  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:17.923285  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:17.923457  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:17.923723  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:17.923739  749871 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-192260 && echo "addons-192260" | sudo tee /etc/hostname
	I1208 23:04:18.056404  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-192260
	
	I1208 23:04:18.059855  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.060262  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.060295  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.060515  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:18.060715  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:18.060729  749871 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-192260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-192260/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-192260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 23:04:18.190259  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 23:04:18.190307  749871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1208 23:04:18.190342  749871 buildroot.go:174] setting up certificates
	I1208 23:04:18.190358  749871 provision.go:84] configureAuth start
	I1208 23:04:18.193453  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.193915  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.193945  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.196316  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.196685  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.196708  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.196827  749871 provision.go:143] copyHostCerts
	I1208 23:04:18.196910  749871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1208 23:04:18.197039  749871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1208 23:04:18.197164  749871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1208 23:04:18.197223  749871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.addons-192260 san=[127.0.0.1 192.168.39.248 addons-192260 localhost minikube]
	I1208 23:04:18.357653  749871 provision.go:177] copyRemoteCerts
	I1208 23:04:18.357719  749871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 23:04:18.360181  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.360565  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.360587  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.360718  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:18.452238  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 23:04:18.483333  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 23:04:18.513964  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 23:04:18.544416  749871 provision.go:87] duration metric: took 354.015787ms to configureAuth
	I1208 23:04:18.544448  749871 buildroot.go:189] setting minikube options for container-runtime
	I1208 23:04:18.544649  749871 config.go:182] Loaded profile config "addons-192260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:04:18.547589  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.547982  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.548007  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.548197  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:18.548422  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:18.548441  749871 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 23:04:18.801923  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 23:04:18.801953  749871 machine.go:97] duration metric: took 1.000775854s to provisionDockerMachine
	I1208 23:04:18.801964  749871 client.go:176] duration metric: took 16.899914678s to LocalClient.Create
	I1208 23:04:18.801995  749871 start.go:167] duration metric: took 16.899989976s to libmachine.API.Create "addons-192260"
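
The CRIO_MINIKUBE_OPTIONS drop-in written just above only takes effect if crio.service reads it; on minikube's Buildroot ISO the unit is assumed to source it as an environment file, roughly like this (representative fragment, not dumped in this log):

	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
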
	I1208 23:04:18.802007  749871 start.go:293] postStartSetup for "addons-192260" (driver="kvm2")
	I1208 23:04:18.802024  749871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 23:04:18.802111  749871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 23:04:18.804969  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.805413  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.805438  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.805660  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:18.895539  749871 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 23:04:18.900592  749871 info.go:137] Remote host: Buildroot 2025.02
	I1208 23:04:18.900630  749871 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1208 23:04:18.900707  749871 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1208 23:04:18.900731  749871 start.go:296] duration metric: took 98.713772ms for postStartSetup
	I1208 23:04:18.904019  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.904417  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.904462  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.904726  749871 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/config.json ...
	I1208 23:04:18.904930  749871 start.go:128] duration metric: took 17.004372113s to createHost
	I1208 23:04:18.907886  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.908349  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:18.908398  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:18.908621  749871 main.go:143] libmachine: Using SSH client type: native
	I1208 23:04:18.908874  749871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1208 23:04:18.908887  749871 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1208 23:04:19.026986  749871 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765235058.980308329
	
	I1208 23:04:19.027025  749871 fix.go:216] guest clock: 1765235058.980308329
	I1208 23:04:19.027035  749871 fix.go:229] Guest: 2025-12-08 23:04:18.980308329 +0000 UTC Remote: 2025-12-08 23:04:18.904943093 +0000 UTC m=+17.111083461 (delta=75.365236ms)
	I1208 23:04:19.027058  749871 fix.go:200] guest clock delta is within tolerance: 75.365236ms
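
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the boot only when the drift is small. A standalone sketch of that comparison, using the two timestamps from this log (the two-second tolerance is illustrative, not minikube's exact threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest `date +%s.%N` output: 1765235058.980308329
		guest := time.Unix(1765235058, 980308329)
		// Host reference time: 2025-12-08 23:04:18.904943093 UTC
		host := time.Unix(1765235058, 904943093)
		delta := guest.Sub(host) // 75.365236ms, matching the log
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < 2*time.Second)
	}
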
	I1208 23:04:19.027064  749871 start.go:83] releasing machines lock for "addons-192260", held for 17.126590983s
	I1208 23:04:19.030213  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.030749  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:19.030777  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.031331  749871 ssh_runner.go:195] Run: cat /version.json
	I1208 23:04:19.031444  749871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 23:04:19.035312  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.035351  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.036664  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:19.036700  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.036730  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:19.036771  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:19.036888  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:19.037102  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:19.146715  749871 ssh_runner.go:195] Run: systemctl --version
	I1208 23:04:19.152970  749871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 23:04:19.311240  749871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 23:04:19.318946  749871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 23:04:19.319029  749871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 23:04:19.346268  749871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 23:04:19.346298  749871 start.go:496] detecting cgroup driver to use...
	I1208 23:04:19.346397  749871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 23:04:19.376775  749871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 23:04:19.399563  749871 docker.go:218] disabling cri-docker service (if available) ...
	I1208 23:04:19.399651  749871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 23:04:19.419157  749871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 23:04:19.436416  749871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 23:04:19.584588  749871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 23:04:19.798115  749871 docker.go:234] disabling docker service ...
	I1208 23:04:19.798221  749871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 23:04:19.815158  749871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 23:04:19.831538  749871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 23:04:19.987076  749871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 23:04:20.128585  749871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 23:04:20.144905  749871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 23:04:20.169335  749871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 23:04:20.169422  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.182777  749871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 23:04:20.182847  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.196172  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.209537  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.222659  749871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 23:04:20.236420  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.249180  749871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:04:20.270844  749871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
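
The net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, shown with the standard CRI-O TOML sections for orientation (the drop-in's remaining contents are not dumped in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
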
	I1208 23:04:20.297816  749871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 23:04:20.308810  749871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1208 23:04:20.308885  749871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1208 23:04:20.332190  749871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 23:04:20.347020  749871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:04:20.496948  749871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 23:04:20.617007  749871 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 23:04:20.617128  749871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 23:04:20.622970  749871 start.go:564] Will wait 60s for crictl version
	I1208 23:04:20.623072  749871 ssh_runner.go:195] Run: which crictl
	I1208 23:04:20.627494  749871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 23:04:20.661834  749871 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1208 23:04:20.661944  749871 ssh_runner.go:195] Run: crio --version
	I1208 23:04:20.692859  749871 ssh_runner.go:195] Run: crio --version
	I1208 23:04:20.724495  749871 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1208 23:04:20.728482  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:20.728869  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:20.728894  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:20.729097  749871 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1208 23:04:20.733750  749871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 23:04:20.749337  749871 kubeadm.go:884] updating cluster {Name:addons-192260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-192260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 23:04:20.749495  749871 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:04:20.749553  749871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 23:04:20.779777  749871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1208 23:04:20.779863  749871 ssh_runner.go:195] Run: which lz4
	I1208 23:04:20.784245  749871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 23:04:20.788981  749871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 23:04:20.789039  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1208 23:04:22.090114  749871 crio.go:462] duration metric: took 1.305900473s to copy over tarball
	I1208 23:04:22.090211  749871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 23:04:23.586471  749871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.496207976s)
	I1208 23:04:23.586516  749871 crio.go:469] duration metric: took 1.496362051s to extract the tarball
	I1208 23:04:23.586527  749871 ssh_runner.go:146] rm: /preloaded.tar.lz4
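
The preload flow above is: keep the node's copy if `stat` succeeds, otherwise scp the cached tarball, extract it into /var with tar piped through lz4, then delete it. A local Go stand-in for the extract-and-clean-up half, reusing the exact tar flags from the log (the sudo/scp plumbing is elided):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		// Mirrors the existence check: only copy when stat fails.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("not on disk, would copy the cached preload here:", err)
			return
		}
		// Exact flags from the log: extract with xattrs through lz4 into /var.
		out, err := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
		if err != nil {
			fmt.Println(err, string(out))
			return
		}
		// Matches the final rm of the tarball.
		os.Remove(tarball)
	}
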
	I1208 23:04:23.626325  749871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 23:04:23.667025  749871 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 23:04:23.667056  749871 cache_images.go:86] Images are preloaded, skipping loading
	I1208 23:04:23.667065  749871 kubeadm.go:935] updating node { 192.168.39.248 8443 v1.34.2 crio true true} ...
	I1208 23:04:23.667186  749871 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-192260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-192260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 23:04:23.667283  749871 ssh_runner.go:195] Run: crio config
	I1208 23:04:23.716530  749871 cni.go:84] Creating CNI manager for ""
	I1208 23:04:23.716562  749871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:04:23.716582  749871 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 23:04:23.716607  749871 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-192260 NodeName:addons-192260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 23:04:23.716746  749871 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-192260"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.248"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 23:04:23.716827  749871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 23:04:23.729306  749871 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 23:04:23.729426  749871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 23:04:23.742173  749871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1208 23:04:23.764252  749871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 23:04:23.785909  749871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1208 23:04:23.807385  749871 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I1208 23:04:23.811975  749871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 23:04:23.827542  749871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:04:23.973466  749871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 23:04:24.004921  749871 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260 for IP: 192.168.39.248
	I1208 23:04:24.004957  749871 certs.go:195] generating shared ca certs ...
	I1208 23:04:24.004982  749871 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.005172  749871 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1208 23:04:24.112210  749871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt ...
	I1208 23:04:24.112250  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt: {Name:mk9c0cf8604884680f7544c5a6a9412f24bad0ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.112454  749871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key ...
	I1208 23:04:24.112468  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key: {Name:mk01f34e5740e19e9fb0f42b460073e650e7a7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.112595  749871 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1208 23:04:24.310349  749871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt ...
	I1208 23:04:24.310397  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt: {Name:mk78fa58742a9cab6d3f7c22e13e0bc11d08a127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.310584  749871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key ...
	I1208 23:04:24.310596  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key: {Name:mk84984d6dbc90cb26143d146b97b8e1d7b8ac1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.310667  749871 certs.go:257] generating profile certs ...
	I1208 23:04:24.310729  749871 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.key
	I1208 23:04:24.310743  749871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt with IP's: []
	I1208 23:04:24.522956  749871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt ...
	I1208 23:04:24.523000  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: {Name:mk5cf6708ad9babbd59ebab551ed37739fc155e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.523227  749871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.key ...
	I1208 23:04:24.523253  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.key: {Name:mk51c799d776ff2c201a05bcda48be1fea6ef6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.523378  749871 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key.85d09812
	I1208 23:04:24.523405  749871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt.85d09812 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.248]
	I1208 23:04:24.558124  749871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt.85d09812 ...
	I1208 23:04:24.558163  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt.85d09812: {Name:mkc04723d8b3e794335045d8833750e07eb60e8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.558386  749871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key.85d09812 ...
	I1208 23:04:24.558406  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key.85d09812: {Name:mk9d3cc8063482a4e9e741ee1079a7d77cc37fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.558518  749871 certs.go:382] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt.85d09812 -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt
	I1208 23:04:24.558650  749871 certs.go:386] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key.85d09812 -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key
	I1208 23:04:24.558730  749871 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.key
	I1208 23:04:24.558758  749871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.crt with IP's: []
	I1208 23:04:24.658349  749871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.crt ...
	I1208 23:04:24.658402  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.crt: {Name:mke928fded4ec3d0bebc3bd9925fc0c75c51e483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.658623  749871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.key ...
	I1208 23:04:24.658645  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.key: {Name:mkd44daed3c222cc3c7390ca3e1e216469f08ae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:24.658859  749871 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 23:04:24.658912  749871 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1208 23:04:24.658951  749871 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1208 23:04:24.658993  749871 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
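
Each "generating ... ca cert" step above boils down to creating a self-signed CA and writing the PEM pair under .minikube. A minimal Go sketch of that shape (key size, subject, and lifetime are illustrative, not minikube's exact parameters):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template is both subject and issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
		fmt.Println("wrote ca.crt and ca.key")
	}
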
	I1208 23:04:24.659627  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 23:04:24.701162  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 23:04:24.737854  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 23:04:24.771506  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1208 23:04:24.802099  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 23:04:24.832284  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 23:04:24.863846  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 23:04:24.895396  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 23:04:24.926627  749871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 23:04:24.958107  749871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 23:04:24.980274  749871 ssh_runner.go:195] Run: openssl version
	I1208 23:04:24.987014  749871 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:04:24.999220  749871 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 23:04:25.011766  749871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:04:25.017769  749871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:04:25.017844  749871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:04:25.026037  749871 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 23:04:25.040790  749871 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
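
The b5213941.0 symlink exists because OpenSSL-style verifiers look up CAs in /etc/ssl/certs by subject-name hash (the value `openssl x509 -hash -noout` printed two commands earlier) plus a .0 suffix. A tiny Go check that the link resolves as expected:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// b5213941 is the subject-name hash printed for minikubeCA above.
		target, err := os.Readlink("/etc/ssl/certs/b5213941.0")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("b5213941.0 ->", target)
	}
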
	I1208 23:04:25.053599  749871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 23:04:25.058947  749871 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 23:04:25.059026  749871 kubeadm.go:401] StartCluster: {Name:addons-192260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-192260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:04:25.059127  749871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 23:04:25.059194  749871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 23:04:25.094818  749871 cri.go:89] found id: ""
	I1208 23:04:25.094898  749871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 23:04:25.108007  749871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 23:04:25.120732  749871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 23:04:25.133412  749871 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 23:04:25.133440  749871 kubeadm.go:158] found existing configuration files:
	
	I1208 23:04:25.133514  749871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 23:04:25.145196  749871 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 23:04:25.145298  749871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 23:04:25.157607  749871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 23:04:25.169325  749871 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 23:04:25.169433  749871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 23:04:25.181791  749871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 23:04:25.194534  749871 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 23:04:25.194603  749871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 23:04:25.207165  749871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 23:04:25.218996  749871 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 23:04:25.219109  749871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 23:04:25.231571  749871 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1208 23:04:25.383072  749871 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 23:04:36.958309  749871 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 23:04:36.958435  749871 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 23:04:36.958557  749871 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 23:04:36.958716  749871 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 23:04:36.958851  749871 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 23:04:36.958942  749871 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 23:04:36.960531  749871 out.go:252]   - Generating certificates and keys ...
	I1208 23:04:36.960609  749871 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 23:04:36.960669  749871 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 23:04:36.960776  749871 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 23:04:36.960876  749871 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 23:04:36.960967  749871 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 23:04:36.961036  749871 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 23:04:36.961099  749871 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 23:04:36.961295  749871 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-192260 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I1208 23:04:36.961395  749871 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 23:04:36.961526  749871 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-192260 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I1208 23:04:36.961627  749871 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 23:04:36.961728  749871 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 23:04:36.961792  749871 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 23:04:36.961856  749871 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 23:04:36.961933  749871 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 23:04:36.962007  749871 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 23:04:36.962090  749871 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 23:04:36.962184  749871 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 23:04:36.962277  749871 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 23:04:36.962403  749871 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 23:04:36.962480  749871 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 23:04:36.964693  749871 out.go:252]   - Booting up control plane ...
	I1208 23:04:36.964794  749871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 23:04:36.964885  749871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 23:04:36.964975  749871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 23:04:36.965100  749871 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 23:04:36.965227  749871 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 23:04:36.965403  749871 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 23:04:36.965530  749871 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 23:04:36.965601  749871 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 23:04:36.965722  749871 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 23:04:36.965844  749871 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 23:04:36.965927  749871 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002111363s
	I1208 23:04:36.966040  749871 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 23:04:36.966147  749871 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.248:8443/livez
	I1208 23:04:36.966287  749871 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 23:04:36.966425  749871 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 23:04:36.966546  749871 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508664255s
	I1208 23:04:36.966648  749871 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.105708555s
	I1208 23:04:36.966719  749871 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001768323s
	I1208 23:04:36.966812  749871 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 23:04:36.966958  749871 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 23:04:36.967014  749871 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 23:04:36.967176  749871 kubeadm.go:319] [mark-control-plane] Marking the node addons-192260 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 23:04:36.967235  749871 kubeadm.go:319] [bootstrap-token] Using token: etosue.ub36uggpfp8xud8w
	I1208 23:04:36.968496  749871 out.go:252]   - Configuring RBAC rules ...
	I1208 23:04:36.968655  749871 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 23:04:36.968780  749871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 23:04:36.968989  749871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 23:04:36.969185  749871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 23:04:36.969333  749871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 23:04:36.969475  749871 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 23:04:36.969629  749871 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 23:04:36.969696  749871 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 23:04:36.969760  749871 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 23:04:36.969777  749871 kubeadm.go:319] 
	I1208 23:04:36.969864  749871 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 23:04:36.969875  749871 kubeadm.go:319] 
	I1208 23:04:36.969982  749871 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 23:04:36.969992  749871 kubeadm.go:319] 
	I1208 23:04:36.970038  749871 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 23:04:36.970118  749871 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 23:04:36.970180  749871 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 23:04:36.970187  749871 kubeadm.go:319] 
	I1208 23:04:36.970229  749871 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 23:04:36.970234  749871 kubeadm.go:319] 
	I1208 23:04:36.970299  749871 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 23:04:36.970310  749871 kubeadm.go:319] 
	I1208 23:04:36.970397  749871 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 23:04:36.970504  749871 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 23:04:36.970597  749871 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 23:04:36.970606  749871 kubeadm.go:319] 
	I1208 23:04:36.970686  749871 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 23:04:36.970802  749871 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 23:04:36.970820  749871 kubeadm.go:319] 
	I1208 23:04:36.970945  749871 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etosue.ub36uggpfp8xud8w \
	I1208 23:04:36.971061  749871 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a \
	I1208 23:04:36.971082  749871 kubeadm.go:319] 	--control-plane 
	I1208 23:04:36.971091  749871 kubeadm.go:319] 
	I1208 23:04:36.971159  749871 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 23:04:36.971165  749871 kubeadm.go:319] 
	I1208 23:04:36.971235  749871 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etosue.ub36uggpfp8xud8w \
	I1208 23:04:36.971376  749871 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a 
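
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A Go sketch that recomputes it from a local copy of the CA (the local path is illustrative; on the node the file lives under /var/lib/minikube/certs):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("ca.crt") // illustrative local path
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		// kubeadm's format: sha256 over the DER-encoded SubjectPublicKeyInfo.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
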
	I1208 23:04:36.971396  749871 cni.go:84] Creating CNI manager for ""
	I1208 23:04:36.971404  749871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:04:36.973614  749871 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1208 23:04:36.974799  749871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1208 23:04:36.988326  749871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
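
The 496-byte conflist itself is not echoed into the log; a representative bridge configuration consistent with the 10.244.0.0/16 pod CIDR chosen earlier would look roughly like this (field set and cniVersion are assumptions, not the actual file):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
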
	I1208 23:04:37.014522  749871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 23:04:37.014580  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:37.014621  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-192260 minikube.k8s.io/updated_at=2025_12_08T23_04_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2846307350d09469fc6b6b47dd0c4837fa740d9c minikube.k8s.io/name=addons-192260 minikube.k8s.io/primary=true
	I1208 23:04:37.054811  749871 ops.go:34] apiserver oom_adj: -16
	I1208 23:04:37.176238  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:37.676591  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:38.176897  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:38.677336  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:39.177296  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:39.677299  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:40.176751  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:40.677128  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:41.177345  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:41.676809  749871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 23:04:41.776026  749871 kubeadm.go:1114] duration metric: took 4.761503173s to wait for elevateKubeSystemPrivileges
	I1208 23:04:41.776098  749871 kubeadm.go:403] duration metric: took 16.717078225s to StartCluster
	I1208 23:04:41.776130  749871 settings.go:142] acquiring lock: {Name:mk01a7d116accfccda14c363bded9d7c0216d454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:41.776318  749871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:04:41.776862  749871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/kubeconfig: {Name:mk0db57d03f858808a26818547681e8d59b0a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:04:41.777136  749871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 23:04:41.777214  749871 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 23:04:41.777311  749871 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1208 23:04:41.777469  749871 addons.go:70] Setting yakd=true in profile "addons-192260"
	I1208 23:04:41.777491  749871 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-192260"
	I1208 23:04:41.777513  749871 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-192260"
	I1208 23:04:41.777520  749871 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-192260"
	I1208 23:04:41.777525  749871 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-192260"
	I1208 23:04:41.777491  749871 addons.go:70] Setting metrics-server=true in profile "addons-192260"
	I1208 23:04:41.777534  749871 addons.go:70] Setting ingress-dns=true in profile "addons-192260"
	I1208 23:04:41.777555  749871 addons.go:239] Setting addon metrics-server=true in "addons-192260"
	I1208 23:04:41.777567  749871 addons.go:70] Setting registry=true in profile "addons-192260"
	I1208 23:04:41.777569  749871 addons.go:70] Setting ingress=true in profile "addons-192260"
	I1208 23:04:41.777571  749871 addons.go:70] Setting inspektor-gadget=true in profile "addons-192260"
	I1208 23:04:41.777569  749871 addons.go:70] Setting gcp-auth=true in profile "addons-192260"
	I1208 23:04:41.777585  749871 addons.go:70] Setting registry-creds=true in profile "addons-192260"
	I1208 23:04:41.777594  749871 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-192260"
	I1208 23:04:41.777598  749871 addons.go:239] Setting addon inspektor-gadget=true in "addons-192260"
	I1208 23:04:41.777603  749871 addons.go:239] Setting addon registry-creds=true in "addons-192260"
	I1208 23:04:41.777612  749871 mustload.go:66] Loading cluster: addons-192260
	I1208 23:04:41.777756  749871 config.go:182] Loaded profile config "addons-192260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:04:41.777498  749871 addons.go:239] Setting addon yakd=true in "addons-192260"
	I1208 23:04:41.777826  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777558  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777622  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777560  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777961  749871 config.go:182] Loaded profile config "addons-192260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:04:41.777559  749871 addons.go:239] Setting addon ingress-dns=true in "addons-192260"
	I1208 23:04:41.778091  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777578  749871 addons.go:239] Setting addon registry=true in "addons-192260"
	I1208 23:04:41.778637  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777580  749871 addons.go:239] Setting addon ingress=true in "addons-192260"
	I1208 23:04:41.778931  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777623  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.779225  749871 out.go:179] * Verifying Kubernetes components...
	I1208 23:04:41.777626  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777630  749871 addons.go:70] Setting storage-provisioner=true in profile "addons-192260"
	I1208 23:04:41.779442  749871 addons.go:239] Setting addon storage-provisioner=true in "addons-192260"
	I1208 23:04:41.779486  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777631  749871 addons.go:70] Setting default-storageclass=true in profile "addons-192260"
	I1208 23:04:41.779663  749871 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-192260"
	I1208 23:04:41.777634  749871 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-192260"
	I1208 23:04:41.779739  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777638  749871 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-192260"
	I1208 23:04:41.780209  749871 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-192260"
	I1208 23:04:41.777642  749871 addons.go:70] Setting volumesnapshots=true in profile "addons-192260"
	I1208 23:04:41.780599  749871 addons.go:239] Setting addon volumesnapshots=true in "addons-192260"
	I1208 23:04:41.780631  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.777643  749871 addons.go:70] Setting volcano=true in profile "addons-192260"
	I1208 23:04:41.780895  749871 addons.go:239] Setting addon volcano=true in "addons-192260"
	I1208 23:04:41.780928  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.780941  749871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:04:41.777471  749871 addons.go:70] Setting cloud-spanner=true in profile "addons-192260"
	I1208 23:04:41.781115  749871 addons.go:239] Setting addon cloud-spanner=true in "addons-192260"
	I1208 23:04:41.781148  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.784416  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.785840  749871 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1208 23:04:41.785871  749871 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1208 23:04:41.785873  749871 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1208 23:04:41.785897  749871 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1208 23:04:41.787447  749871 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1208 23:04:41.787513  749871 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1208 23:04:41.787537  749871 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 23:04:41.787522  749871 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1208 23:04:41.787574  749871 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1208 23:04:41.787542  749871 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1208 23:04:41.787601  749871 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1208 23:04:41.788011  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1208 23:04:41.787553  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1208 23:04:41.787456  749871 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1208 23:04:41.787814  749871 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 23:04:41.788810  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 23:04:41.789175  749871 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 23:04:41.789606  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1208 23:04:41.789246  749871 addons.go:239] Setting addon default-storageclass=true in "addons-192260"
	I1208 23:04:41.789761  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.789959  749871 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 23:04:41.789980  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1208 23:04:41.790106  749871 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 23:04:41.790087  749871 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	W1208 23:04:41.789832  749871 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1208 23:04:41.790154  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 23:04:41.790500  749871 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-192260"
	I1208 23:04:41.791263  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:41.790575  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1208 23:04:41.790622  749871 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1208 23:04:41.791543  749871 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1208 23:04:41.791569  749871 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 23:04:41.792005  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 23:04:41.792827  749871 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 23:04:41.792840  749871 out.go:179]   - Using image docker.io/registry:3.0.0
	I1208 23:04:41.792844  749871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 23:04:41.793556  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 23:04:41.793583  749871 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 23:04:41.793591  749871 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1208 23:04:41.793645  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 23:04:41.794311  749871 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 23:04:41.794332  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1208 23:04:41.794997  749871 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1208 23:04:41.795029  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 23:04:41.795357  749871 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 23:04:41.795391  749871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 23:04:41.796334  749871 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 23:04:41.796355  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1208 23:04:41.797639  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 23:04:41.797680  749871 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 23:04:41.798980  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 23:04:41.799243  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.799870  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.800239  749871 out.go:179]   - Using image docker.io/busybox:stable
	I1208 23:04:41.800863  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.800901  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.801132  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 23:04:41.801328  749871 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 23:04:41.801344  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 23:04:41.801376  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.801511  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.801544  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.801676  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.802439  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.802483  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.802658  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.802968  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.803573  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.803614  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.803746  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 23:04:41.803812  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.803839  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.803953  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.804397  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.804443  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.804499  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.804724  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.804986  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.805018  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.805522  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.805659  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.805692  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.805843  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.806387  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 23:04:41.806653  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.806872  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.807100  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.807992  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.808094  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.808125  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.808246  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.808294  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.808337  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.808571  749871 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 23:04:41.808839  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.808866  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.809088  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.809521  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.809585  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.809620  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.809652  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.809666  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.809915  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.809927  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.809984  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.810039  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.810425  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.810449  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 23:04:41.810464  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 23:04:41.810642  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.810678  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.811002  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.811374  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.811728  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.811763  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.811931  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:41.813739  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.814205  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:41.814239  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:41.814461  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	W1208 23:04:41.967226  749871 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40796->192.168.39.248:22: read: connection reset by peer
	I1208 23:04:41.967272  749871 retry.go:31] will retry after 160.961298ms: ssh: handshake failed: read tcp 192.168.39.1:40796->192.168.39.248:22: read: connection reset by peer
	I1208 23:04:42.403716  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1208 23:04:42.637941  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 23:04:42.665884  749871 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1208 23:04:42.665917  749871 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1208 23:04:42.750649  749871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 23:04:42.750668  749871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
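The bash pipeline above injects a host record for host.minikube.internal into the CoreDNS Corefile. Reconstructed from the two sed expressions, the relevant part of the resulting Corefile should look like this fragment (indentation illustrative; unrelated directives elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The hosts block sits immediately before the forward directive, so host.minikube.internal is answered locally and every other name falls through to the upstream resolver.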
	I1208 23:04:42.795181  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 23:04:42.795212  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 23:04:42.862105  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 23:04:42.876093  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 23:04:42.880576  749871 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 23:04:42.880594  749871 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 23:04:42.887672  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 23:04:42.955972  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 23:04:43.043170  749871 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 23:04:43.043194  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 23:04:43.046501  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 23:04:43.056643  749871 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 23:04:43.056700  749871 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 23:04:43.198204  749871 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1208 23:04:43.198236  749871 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1208 23:04:43.224274  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 23:04:43.301015  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 23:04:43.311272  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 23:04:43.546164  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 23:04:43.546199  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 23:04:43.598809  749871 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 23:04:43.598843  749871 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 23:04:43.702619  749871 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 23:04:43.702658  749871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 23:04:43.719137  749871 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 23:04:43.719170  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 23:04:43.746696  749871 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1208 23:04:43.746722  749871 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1208 23:04:43.918629  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 23:04:43.918682  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 23:04:44.001694  749871 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 23:04:44.001735  749871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 23:04:44.008620  749871 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 23:04:44.008651  749871 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 23:04:44.059653  749871 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1208 23:04:44.059677  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1208 23:04:44.085142  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1208 23:04:44.299736  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 23:04:44.299780  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 23:04:44.338830  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 23:04:44.338866  749871 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 23:04:44.339862  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1208 23:04:44.372261  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 23:04:44.725071  749871 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 23:04:44.725121  749871 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 23:04:44.774197  749871 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 23:04:44.774226  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 23:04:45.209947  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 23:04:45.209972  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 23:04:45.218512  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 23:04:45.598423  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 23:04:45.598455  749871 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 23:04:45.928151  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 23:04:45.928179  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 23:04:46.151625  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 23:04:46.151662  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 23:04:46.422800  749871 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 23:04:46.422830  749871 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 23:04:46.882460  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 23:04:49.228661  749871 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 23:04:49.231706  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:49.232251  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:49.232283  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:49.232493  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:49.258083  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.620099249s)
	I1208 23:04:49.258165  749871 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.507461395s)
	I1208 23:04:49.258249  749871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.507460362s)
	I1208 23:04:49.258284  749871 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1208 23:04:49.258319  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.396173832s)
	I1208 23:04:49.258398  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.382277939s)
	I1208 23:04:49.259156  749871 node_ready.go:35] waiting up to 6m0s for node "addons-192260" to be "Ready" ...
	I1208 23:04:49.260202  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.856447391s)
	I1208 23:04:49.342985  749871 node_ready.go:49] node "addons-192260" is "Ready"
	I1208 23:04:49.343024  749871 node_ready.go:38] duration metric: took 83.842681ms for node "addons-192260" to be "Ready" ...
	I1208 23:04:49.343047  749871 api_server.go:52] waiting for apiserver process to appear ...
	I1208 23:04:49.343115  749871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 23:04:49.421125  749871 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
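This warning is a benign update conflict: with both the default-storageclass and storage-provisioner-rancher addons enabled, two writers race to toggle the storageclass.kubernetes.io/is-default-class annotation. If it persisted, one way to settle it manually would be the following (assuming minikube's built-in class is named "standard"; local-path is the class named in the error):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'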
	I1208 23:04:49.499262  749871 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 23:04:49.564684  749871 addons.go:239] Setting addon gcp-auth=true in "addons-192260"
	I1208 23:04:49.564760  749871 host.go:66] Checking if "addons-192260" exists ...
	I1208 23:04:49.566953  749871 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 23:04:49.569649  749871 main.go:143] libmachine: domain addons-192260 has defined MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:49.570104  749871 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:a1:f5", ip: ""} in network mk-addons-192260: {Iface:virbr1 ExpiryTime:2025-12-09 00:04:17 +0000 UTC Type:0 Mac:52:54:00:c0:a1:f5 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-192260 Clientid:01:52:54:00:c0:a1:f5}
	I1208 23:04:49.570134  749871 main.go:143] libmachine: domain addons-192260 has defined IP address 192.168.39.248 and MAC address 52:54:00:c0:a1:f5 in network mk-addons-192260
	I1208 23:04:49.570305  749871 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/addons-192260/id_rsa Username:docker}
	I1208 23:04:49.889975  749871 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-192260" context rescaled to 1 replicas
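"Rescaled to 1 replicas" means minikube trimmed the stock two-replica CoreDNS deployment down to a single replica, which is sufficient on a one-node cluster. The manual equivalent would be something like:

	kubectl -n kube-system scale deployment coredns --replicas=1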
	I1208 23:04:51.390579  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.502864592s)
	I1208 23:04:51.390632  749871 addons.go:495] Verifying addon ingress=true in "addons-192260"
	I1208 23:04:51.390656  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.434642238s)
	I1208 23:04:51.390769  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.166467497s)
	I1208 23:04:51.390724  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.344194767s)
	I1208 23:04:51.390824  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.089781141s)
	I1208 23:04:51.390895  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.079588446s)
	I1208 23:04:51.390953  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.305779304s)
	I1208 23:04:51.390982  749871 addons.go:495] Verifying addon registry=true in "addons-192260"
	I1208 23:04:51.391065  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.051157056s)
	I1208 23:04:51.391150  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.018849453s)
	I1208 23:04:51.392085  749871 addons.go:495] Verifying addon metrics-server=true in "addons-192260"
	I1208 23:04:51.391289  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.172733266s)
	W1208 23:04:51.392128  749871 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 23:04:51.392158  749871 retry.go:31] will retry after 202.471185ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
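The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CustomResourceDefinition, before the API server has established the new type. The retry below succeeds after a short delay; outside a retry loop, the usual fix is to wait for the CRD to be established before applying objects of that kind, e.g.:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml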
	I1208 23:04:51.392509  749871 out.go:179] * Verifying ingress addon...
	I1208 23:04:51.392529  749871 out.go:179] * Verifying registry addon...
	I1208 23:04:51.393205  749871 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-192260 service yakd-dashboard -n yakd-dashboard
	
	I1208 23:04:51.395381  749871 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 23:04:51.395381  749871 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 23:04:51.407019  749871 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 23:04:51.407043  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:51.409883  749871 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 23:04:51.409916  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:51.595143  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 23:04:51.908802  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:51.908969  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:52.280842  749871 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.937692496s)
	I1208 23:04:52.280901  749871 api_server.go:72] duration metric: took 10.503644022s to wait for apiserver process to appear ...
	I1208 23:04:52.280911  749871 api_server.go:88] waiting for apiserver healthz status ...
	I1208 23:04:52.280938  749871 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I1208 23:04:52.280901  749871 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.713921134s)
	I1208 23:04:52.281027  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.398504754s)
	I1208 23:04:52.281067  749871 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-192260"
	I1208 23:04:52.282495  749871 out.go:179] * Verifying csi-hostpath-driver addon...
	I1208 23:04:52.282495  749871 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1208 23:04:52.284444  749871 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1208 23:04:52.285107  749871 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 23:04:52.285648  749871 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 23:04:52.285668  749871 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 23:04:52.289723  749871 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
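The healthz probe here is a plain HTTPS GET against the API server; anonymous access to /healthz is permitted by the default system:public-info-viewer role, so the manual equivalent is simply:

	curl -sk https://192.168.39.248:8443/healthz
	ok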
	I1208 23:04:52.299382  749871 api_server.go:141] control plane version: v1.34.2
	I1208 23:04:52.299423  749871 api_server.go:131] duration metric: took 18.505448ms to wait for apiserver health ...
	I1208 23:04:52.299437  749871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 23:04:52.313533  749871 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 23:04:52.313561  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:04:52.328134  749871 system_pods.go:59] 20 kube-system pods found
	I1208 23:04:52.328199  749871 system_pods.go:61] "amd-gpu-device-plugin-bn8cc" [6f327f75-80d6-49f4-9738-53ef956f000c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1208 23:04:52.328213  749871 system_pods.go:61] "coredns-66bc5c9577-4v7bp" [476274e3-e864-427e-a8e3-1137c244494f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 23:04:52.328225  749871 system_pods.go:61] "coredns-66bc5c9577-tfdh9" [57ca4c3e-4d53-497e-a949-caa33c176f0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 23:04:52.328240  749871 system_pods.go:61] "csi-hostpath-attacher-0" [d717d28f-a8e9-48a9-849b-d25ddb0d7ab3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1208 23:04:52.328246  749871 system_pods.go:61] "csi-hostpath-resizer-0" [e738d6b3-4e31-4bdf-8151-fbdae0c7d52d] Pending
	I1208 23:04:52.328261  749871 system_pods.go:61] "csi-hostpathplugin-gkmhh" [805df2ad-2fde-4a0c-8155-6e44952dadc3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 23:04:52.328268  749871 system_pods.go:61] "etcd-addons-192260" [5ee547ee-c108-4083-948b-fbae4eb8adb3] Running
	I1208 23:04:52.328275  749871 system_pods.go:61] "kube-apiserver-addons-192260" [eac8a830-0430-418f-8809-416edcc2f77c] Running
	I1208 23:04:52.328282  749871 system_pods.go:61] "kube-controller-manager-addons-192260" [2ec12a7f-68fb-4a9a-8823-4d44ce95dfbf] Running
	I1208 23:04:52.328292  749871 system_pods.go:61] "kube-ingress-dns-minikube" [799c1614-93a0-4a97-9610-3b9c3136c6fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1208 23:04:52.328298  749871 system_pods.go:61] "kube-proxy-tfs65" [189229ed-9887-4087-b678-fda9852fa12a] Running
	I1208 23:04:52.328304  749871 system_pods.go:61] "kube-scheduler-addons-192260" [8bd83a9a-1786-45b5-bab5-1b663e3db1aa] Running
	I1208 23:04:52.328311  749871 system_pods.go:61] "metrics-server-85b7d694d7-kgd8r" [89d19b47-85d3-4998-b247-410e617d840f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 23:04:52.328325  749871 system_pods.go:61] "nvidia-device-plugin-daemonset-zzn4k" [89aaba7e-70b7-4a68-b81c-78d0eca0b964] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 23:04:52.328352  749871 system_pods.go:61] "registry-6b586f9694-2ds54" [f2664081-e338-412b-893e-73fbe9c38553] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 23:04:52.328380  749871 system_pods.go:61] "registry-creds-764b6fb674-qjhd2" [8b21056d-50b4-4f41-ae47-ceccd213f5d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 23:04:52.328393  749871 system_pods.go:61] "registry-proxy-6nf92" [eb6ff610-2f42-4c13-82ff-ba9cea5c6601] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 23:04:52.328402  749871 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dmtj5" [136c8488-7b4d-43f4-a9be-da7b7403c0f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 23:04:52.328418  749871 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nv5mq" [b212e524-9eb0-49df-a095-121955ac40f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 23:04:52.328429  749871 system_pods.go:61] "storage-provisioner" [569ba41e-9894-4a5c-b180-27af073059ee] Running
	I1208 23:04:52.328439  749871 system_pods.go:74] duration metric: took 28.994083ms to wait for pod list to return data ...
	I1208 23:04:52.328464  749871 default_sa.go:34] waiting for default service account to be created ...
	I1208 23:04:52.362773  749871 default_sa.go:45] found service account: "default"
	I1208 23:04:52.362812  749871 default_sa.go:55] duration metric: took 34.339846ms for default service account to be created ...
	I1208 23:04:52.362827  749871 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 23:04:52.396638  749871 system_pods.go:86] 20 kube-system pods found
	[... system_pods.go:89 then lists the same 20 kube-system pods, with statuses identical to the system_pods.go:61 listing above ...]
	I1208 23:04:52.396901  749871 system_pods.go:126] duration metric: took 34.06495ms to wait for k8s-apps to be running ...
	I1208 23:04:52.396915  749871 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 23:04:52.396988  749871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:04:52.400196  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:52.404004  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:52.452189  749871 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 23:04:52.452218  749871 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 23:04:52.571120  749871 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 23:04:52.571146  749871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1208 23:04:52.673537  749871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 23:04:52.791198  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:04:52.907282  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:52.907612  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:53.292921  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:04:53.401574  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:53.405153  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:53.634174  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.038968081s)
	I1208 23:04:53.634224  749871 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.237201321s)
	I1208 23:04:53.634264  749871 system_svc.go:56] duration metric: took 1.237345324s WaitForService to wait for kubelet
	I1208 23:04:53.634278  749871 kubeadm.go:587] duration metric: took 11.85701972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 23:04:53.634305  749871 node_conditions.go:102] verifying NodePressure condition ...
	I1208 23:04:53.657345  749871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 23:04:53.657391  749871 node_conditions.go:123] node cpu capacity is 2
	I1208 23:04:53.657407  749871 node_conditions.go:105] duration metric: took 23.094708ms to run NodePressure ...
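	# Illustrative annotation (not captured output): the NodePressure figures above are read
	# from the node's reported capacity; a minimal manual equivalent, assuming the same
	# single-node cluster and context:
	#     kubectl --context addons-192260 get node addons-192260 \
	#       -o jsonpath='{.status.capacity}'   # shows cpu, ephemeral-storage, memory, pods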
	I1208 23:04:53.657422  749871 start.go:242] waiting for startup goroutines ...
	I1208 23:04:53.876137  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:04:53.925355  749871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.251761874s)
	I1208 23:04:53.926539  749871 addons.go:495] Verifying addon gcp-auth=true in "addons-192260"
	I1208 23:04:53.928821  749871 out.go:179] * Verifying gcp-auth addon...
	I1208 23:04:53.930759  749871 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 23:04:53.944581  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:04:53.945510  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 23:04:53.957652  749871 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 23:04:53.957678  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 "waiting for pod" poll lines repeat for the kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and kubernetes.io/minikube-addons=gcp-auth selectors, several times per second, from 23:04:54 through 23:05:25; every selector remains Pending: [<nil>] throughout this window ...]
	I1208 23:05:25.900842  749871 kapi.go:107] duration metric: took 34.505473345s to wait for kubernetes.io/minikube-addons=registry ...
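	# Illustrative annotation (not captured output): each kapi.go poll loop in this log blocks
	# until the pods matching a label selector report Ready; a hand-rolled equivalent of the
	# registry wait that just completed, assuming the same context and the kube-system
	# namespace shown in the pod listings above:
	#     kubectl --context addons-192260 -n kube-system wait --for=condition=Ready \
	#       pod -l kubernetes.io/minikube-addons=registry --timeout=6m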
	[... the poll loop continues for the remaining csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, and gcp-auth selectors, still Pending: [<nil>], from 23:05:25 through 23:05:41 ...]
	I1208 23:05:41.399357  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:41.436544  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:41.797647  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:41.902023  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:41.936849  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:42.292341  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:42.401533  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:42.436706  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:42.790779  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:42.900092  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:42.936437  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:43.289425  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:43.402163  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:43.435524  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:43.790972  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:43.904449  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:43.935029  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:44.289299  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:44.402697  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:44.502418  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:44.789714  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:44.911060  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:44.938575  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:45.293546  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:45.400644  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:45.436604  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:45.794457  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:45.902291  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:45.934978  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:46.447144  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:46.451509  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:46.451510  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:46.789971  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:46.912680  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:46.935255  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:47.288639  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:47.402317  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:47.434014  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:47.789994  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:47.899123  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:47.938411  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:48.289276  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:48.404644  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:48.437389  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:48.792918  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:48.901259  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:48.935296  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:49.292594  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:49.409085  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:49.434571  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:49.789657  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:49.900598  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:49.937612  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:50.294759  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:50.401034  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:50.434223  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:50.793687  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:51.064082  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:51.093613  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:51.290638  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:51.399540  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:51.438620  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:51.790125  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:51.902746  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:51.938456  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:52.290559  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:52.404046  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:52.436548  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:52.794996  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:52.902447  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:52.935053  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:53.291253  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:53.408015  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:53.433647  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:53.793476  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:53.901394  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:53.934860  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:54.289262  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:54.403686  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:54.441191  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:54.789349  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:54.900864  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:54.937191  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:55.288670  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:55.402868  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:55.439319  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:55.789920  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:55.899281  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:55.934820  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:56.291168  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:56.403019  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:56.438541  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:56.790087  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:56.900879  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:56.934995  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:57.290395  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 23:05:57.404325  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:57.435732  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:57.789307  749871 kapi.go:107] duration metric: took 1m5.50420062s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 23:05:57.901232  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:57.934165  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:58.406280  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:58.435444  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:58.903742  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:58.937500  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:59.401754  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:59.437864  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:05:59.900172  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:05:59.935995  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:00.404661  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:00.436651  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:00.903332  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:01.133747  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:01.400158  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:01.433780  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:01.900291  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:01.938813  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:02.399821  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:02.436740  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:02.905808  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:02.935225  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:03.516478  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:03.517064  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:03.900065  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:03.934117  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:04.401615  749871 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 23:06:04.434598  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:04.900343  749871 kapi.go:107] duration metric: took 1m13.504974899s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 23:06:04.936561  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:05.434156  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:05.990918  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:06.436451  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:06.937347  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:07.437057  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:07.934924  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:08.473924  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:08.935406  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:09.434755  749871 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 23:06:09.937013  749871 kapi.go:107] duration metric: took 1m16.006249793s to wait for kubernetes.io/minikube-addons=gcp-auth ...
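
The block above is minikube's addon-readiness poll: roughly every 500ms (visible in the timestamps) it lists the pods matching each addon's label selector and logs the current state until they come up, then records the duration (1m5.5s for csi-hostpath-driver, 1m13.5s for ingress-nginx, 1m16s for gcp-auth). Below is a minimal sketch of that kind of loop using client-go and apimachinery's wait helpers; the namespace and selector are taken from the log, but the interval, timeout, and structure are illustrative assumptions, not minikube's actual kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// logging the observed state each round like the kapi.go:96 lines above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			// No matching pods yet also counts as not ready.
			return len(pods.Items) > 0, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Selector and namespace appear in the log above; the timeout is an assumption.
	if err := waitForPods(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}

Returning (false, nil) on a list error keeps the loop polling through transient apiserver hiccups instead of aborting the whole wait, which matches how the log above simply keeps emitting "Pending" lines until the deadline or success.
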
	I1208 23:06:09.938965  749871 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-192260 cluster.
	I1208 23:06:09.940403  749871 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 23:06:09.941839  749871 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1208 23:06:09.943279  749871 out.go:179] * Enabled addons: cloud-spanner, inspektor-gadget, storage-provisioner-rancher, nvidia-device-plugin, registry-creds, storage-provisioner, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1208 23:06:09.944392  749871 addons.go:530] duration metric: took 1m28.167086703s for enable addons: enabled=[cloud-spanner inspektor-gadget storage-provisioner-rancher nvidia-device-plugin registry-creds storage-provisioner amd-gpu-device-plugin ingress-dns metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
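
Per the gcp-auth notes above, a pod opts out of credential mounting by carrying a label whose key is gcp-auth-skip-secret. A minimal sketch of such a pod spec with typed client-go structs follows; the pod name, image, and label value are illustrative assumptions (per the message, the key is what matters).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithoutGCPCreds builds a pod the gcp-auth addon should skip: the
// gcp-auth-skip-secret label key opts it out of credential mounting.
func podWithoutGCPCreds() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds", // hypothetical name
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true", // key from the log; value arbitrary
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "public.ecr.aws/nginx/nginx:latest", // registry seen elsewhere in this log
			}},
		},
	}
}

func main() {
	fmt.Println(podWithoutGCPCreds().Labels)
}
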
	I1208 23:06:09.944461  749871 start.go:247] waiting for cluster config update ...
	I1208 23:06:09.944510  749871 start.go:256] writing updated cluster config ...
	I1208 23:06:09.944941  749871 ssh_runner.go:195] Run: rm -f paused
	I1208 23:06:09.953378  749871 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 23:06:09.957495  749871 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tfdh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.963107  749871 pod_ready.go:94] pod "coredns-66bc5c9577-tfdh9" is "Ready"
	I1208 23:06:09.963141  749871 pod_ready.go:86] duration metric: took 5.614443ms for pod "coredns-66bc5c9577-tfdh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.965626  749871 pod_ready.go:83] waiting for pod "etcd-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.971599  749871 pod_ready.go:94] pod "etcd-addons-192260" is "Ready"
	I1208 23:06:09.971630  749871 pod_ready.go:86] duration metric: took 5.980042ms for pod "etcd-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.974214  749871 pod_ready.go:83] waiting for pod "kube-apiserver-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.980708  749871 pod_ready.go:94] pod "kube-apiserver-addons-192260" is "Ready"
	I1208 23:06:09.980742  749871 pod_ready.go:86] duration metric: took 6.504097ms for pod "kube-apiserver-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:09.983117  749871 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:10.357787  749871 pod_ready.go:94] pod "kube-controller-manager-addons-192260" is "Ready"
	I1208 23:06:10.357827  749871 pod_ready.go:86] duration metric: took 374.681657ms for pod "kube-controller-manager-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:10.558685  749871 pod_ready.go:83] waiting for pod "kube-proxy-tfs65" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:10.958635  749871 pod_ready.go:94] pod "kube-proxy-tfs65" is "Ready"
	I1208 23:06:10.958682  749871 pod_ready.go:86] duration metric: took 399.95837ms for pod "kube-proxy-tfs65" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:11.159027  749871 pod_ready.go:83] waiting for pod "kube-scheduler-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:11.558474  749871 pod_ready.go:94] pod "kube-scheduler-addons-192260" is "Ready"
	I1208 23:06:11.558532  749871 pod_ready.go:86] duration metric: took 399.471775ms for pod "kube-scheduler-addons-192260" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:06:11.558556  749871 pod_ready.go:40] duration metric: took 1.6051285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 23:06:11.609742  749871 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1208 23:06:11.611500  749871 out.go:179] * Done! kubectl is now configured to use "addons-192260" cluster and "default" namespace by default
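
The pod_ready.go lines above check a different signal than the earlier phase polling: a pod counts as "Ready" when its PodReady condition is True, not merely when it is Running. A small sketch of that condition check against client-go's typed API; minikube's pod_ready.go may differ in detail.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, the kind of
// check behind the `pod "..." is "Ready"` lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(isPodReady(p)) // true
}
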
	
	
	==> CRI-O <==
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.887403405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ef6c7466-b8ca-48d5-95ac-c3fbf8e42f11,},},}" file="otel-collector/interceptors.go:62" id=0960642a-51c1-47b6-bb4b-aac35c28081b name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.887454521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0960642a-51c1-47b6-bb4b-aac35c28081b name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.887492923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0960642a-51c1-47b6-bb4b-aac35c28081b name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.920186077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e718772-60ce-4aac-9104-4cd1d6eb9e44 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.920264362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e718772-60ce-4aac-9104-4cd1d6eb9e44 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.921719918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceb388c2-bac2-4d64-a049-b78fe4261325 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.922884868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765235363922854262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545758,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceb388c2-bac2-4d64-a049-b78fe4261325 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.923938368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9939fdef-ddd8-43ff-b4f9-f5c5df3e52c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.924045601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9939fdef-ddd8-43ff-b4f9-f5c5df3e52c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.924429237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:309d5791c2012f89c133fa639bc97cc65d63dc3ea2c9188177948722a41fd816,PodSandboxId:731efca21a14e3cf5a9072e2df55cdf7ac031d15ba46ff80e0529bbe5a78c037,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765235220966935612,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07d1dbac-13dc-41e6-9fdd-5ba0ff90bb24,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1c10a78960dbcc6738cb009713183693ffabf1767e16dc97f7e50cc4401fd3,PodSandboxId:51b80e6a2519344366e085acf59674757bd0a75bcc561536689d915f88281a3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765235175764942475,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99dafb9a-1bcd-4ac1-832c-8e428899d144,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623a3531921e9572be69093d06aca64916f623f9dff1a8278721b4fd97246f27,PodSandboxId:accc9c79814df359aeb2838a24ec9c59ef652614b85789ded8aa5ef5fe9ea394,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765235164281441867,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-pbfvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4e6fdd3-3ed8-478f-bc16-7540a4b67e1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:827d1d0d9c0113ab0090664fdfac1de1679184e5283352afb7d7e1d68ad45551,PodSandboxId:cfd5b1f1bcd1475141c87c94d4ad81bcdf780502f9a95d8a1b1880f1c754d839,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765235144846851573,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl5kf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed401093-e82d-4f65-9206-8d9498758e1a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603a4318af138b9fe998c55c0a2c868ba3187ab6a11c27fc6074687b65c4eba,PodSandboxId:f7145ca2072cd660db961be9390af6ddaf2a7e3c6c6879aedf214665499465d7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765235144489528360,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9szqh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d0d1cc8-b8f6-440e-be5e-c7127ec93094,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13df55e6ba1b32bb0c9abd08e2780ca3febbd15190f7985779782d183c78c30,PodSandboxId:2fe87c9f5ccae9a7fa33488a0ecabe1b419061d192029214d3ed4da58af98efe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765235116440459052,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799c1614-93a0-4a97-9610-3b9c3136c6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d36f527b2c979470bf5fc9bc13fe432a883613a4760908a42a6f07aa52d1b3,PodSandboxId:ed4c809f71b1ba4816088a7cc4950f8bdfa2f93fcfd835333621b38d38736ac1,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765235100598413913,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f327f75-80d6-49f4-9738-53ef956f000c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cbb52576ae8eaad5b2bf011952dc1c5d2fc6272d406a8e5429c2ceb8f2b97f,PodSandboxId:0fd9a777eff16404334345de265a2f00f8ecd31fa2ba5003c5faa89b31d0a437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765235089851970241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569ba41e-9894-4a5c-b180-27af073059ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1a7e8a872c566692d266604bcd725a4deec0a33f5203bf0e76d640f200e8e0,PodSandboxId:8de01794a04e5025f921db8404d0081b0da5736b74a75ce71278256f0f12297f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765235083144448814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfdh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57ca4c3e-4d53-497e-a949-caa33c176f0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a127933081384352534f979af56716a573e5f0ad42ab91c8ab6bea0202bc588c,PodSandboxId:c072205f9a22f969c63a5d53d18eae030ff0fd7708ca62d9d2aea1899f2611d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765235082402587781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189229ed-9887-4087-b678-fda9852fa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc70613979ccfc27eb729fbfa7d7bf8e6b27fa7838d28e76f4b0c505e8cd883,PodSandboxId:115339d3f8a7a4ab89aa3035fd4d956c0feb6b5e6618995ebbf13ec5066915b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765235071041891386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f3895b3918862c80f063ceeb3f1f2c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7304ec223fc4d8159ccc5654439f8e91a4f51f772b800e8c1c720bd3371231f9,PodSandboxId:a82030babf4af1c8dc3428bda58a7c879c9aaaee33a83c3d548f84ca562bcf27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765235071014178728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7020def229b49676f6e7d95bb226b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d201989df3a2f26f8d34d1fe4440766b6c1213c24c98e04487a4ae97447f46,PodSandboxId:f30a0e3d76538fca59124aa3ab73d7dd92ee8f1581a6247e75272e3c780f328f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765235071019074852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0f82799eab2b7dbc74a30f6e5fcec9,},Annotations:map[string]string{io.kubernetes.
container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42e4afd3533937d3c005d4a7690a61a9be9446a85b23e45df7fdb962387f85f8,PodSandboxId:8dad0731db34f3ca6c59a9012fe213c96cb2d3d1ff9557d1c0cbb0d9cde8f623,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765235070960043014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b578d1402769308033b53b4d4ebed430,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9939fdef-ddd8-43ff-b4f9-f5c5df3e52c5 name=/runtime.v1.RuntimeService/ListContainers
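
The crio[815] entries above are gRPC exchanges on the CRI RuntimeService and ImageService: Version, ImageFsInfo, and ListContainers (once filtered by io.kubernetes.pod.uid, once unfiltered). Below is a minimal sketch of issuing two of those calls against CRI-O with k8s.io/cri-api; the socket path is CRI-O's conventional default (adjust for your host), and the pod UID is copied from the filtered request above.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a local unix socket; the CRI API is plain gRPC over it.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the RuntimeService/Version exchange logged above.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (API %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// Mirrors the ListContainers request filtered by pod UID above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			LabelSelector: map[string]string{
				"io.kubernetes.pod.uid": "ef6c7466-b8ca-48d5-95ac-c3fbf8e42f11", // UID from the log
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}

As in the log, a filter that matches nothing returns an empty Containers list rather than an error, which is why the first exchange above responds with &ListContainersResponse{Containers:[]*Container{},}.
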
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.944727545Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.957955293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ac3a7f5-fdd8-4a98-9c09-026a4fc82ea7 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.958077304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ac3a7f5-fdd8-4a98-9c09-026a4fc82ea7 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.960241729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7e7ec90-bb92-4502-b7fa-a5e1756e83f8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.961424273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765235363961392994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545758,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7e7ec90-bb92-4502-b7fa-a5e1756e83f8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.962317955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7abfb120-aa3e-4914-9a0f-305d6bcbf305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.962401630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7abfb120-aa3e-4914-9a0f-305d6bcbf305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.962762595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:309d5791c2012f89c133fa639bc97cc65d63dc3ea2c9188177948722a41fd816,PodSandboxId:731efca21a14e3cf5a9072e2df55cdf7ac031d15ba46ff80e0529bbe5a78c037,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765235220966935612,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07d1dbac-13dc-41e6-9fdd-5ba0ff90bb24,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d1c10a78960dbcc6738cb009713183693ffabf1767e16dc97f7e50cc4401fd3,PodSandboxId:51b80e6a2519344366e085acf59674757bd0a75bcc561536689d915f88281a3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765235175764942475,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 99dafb9a-1bcd-4ac1-832c-8e428899d144,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623a3531921e9572be69093d06aca64916f623f9dff1a8278721b4fd97246f27,PodSandboxId:accc9c79814df359aeb2838a24ec9c59ef652614b85789ded8aa5ef5fe9ea394,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765235164281441867,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-pbfvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4e6fdd3-3ed8-478f-bc16-7540a4b67e1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:827d1d0d9c0113ab0090664fdfac1de1679184e5283352afb7d7e1d68ad45551,PodSandboxId:cfd5b1f1bcd1475141c87c94d4ad81bcdf780502f9a95d8a1b1880f1c754d839,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765235144846851573,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl5kf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed401093-e82d-4f65-9206-8d9498758e1a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603a4318af138b9fe998c55c0a2c868ba3187ab6a11c27fc6074687b65c4eba,PodSandboxId:f7145ca2072cd660db961be9390af6ddaf2a7e3c6c6879aedf214665499465d7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765235144489528360,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9szqh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d0d1cc8-b8f6-440e-be5e-c7127ec93094,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13df55e6ba1b32bb0c9abd08e2780ca3febbd15190f7985779782d183c78c30,PodSandboxId:2fe87c9f5ccae9a7fa33488a0ecabe1b419061d192029214d3ed4da58af98efe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765235116440459052,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799c1614-93a0-4a97-9610-3b9c3136c6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d36f527b2c979470bf5fc9bc13fe432a883613a4760908a42a6f07aa52d1b3,PodSandboxId:ed4c809f71b1ba4816088a7cc4950f8bdfa2f93fcfd835333621b38d38736ac1,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765235100598413913,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f327f75-80d6-49f4-9738-53ef956f000c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cbb52576ae8eaad5b2bf011952dc1c5d2fc6272d406a8e5429c2ceb8f2b97f,PodSandboxId:0fd9a777eff16404334345de265a2f00f8ecd31fa2ba5003c5faa89b31d0a437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765235089851970241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 569ba41e-9894-4a5c-b180-27af073059ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1a7e8a872c566692d266604bcd725a4deec0a33f5203bf0e76d640f200e8e0,PodSandboxId:8de01794a04e5025f921db8404d0081b0da5736b74a75ce71278256f0f12297f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765235083144448814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfdh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57ca4c3e-4d53-497e-a949-caa33c176f0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a127933081384352534f979af56716a573e5f0ad42ab91c8ab6bea0202bc588c,PodSandboxId:c072205f9a22f969c63a5d53d18eae030ff0fd7708ca62d9d2aea1899f2611d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765235082402587781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189229ed-9887-4087-b678-fda9852fa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc70613979ccfc27eb729fbfa7d7bf8e6b27fa7838d28e76f4b0c505e8cd883,PodSandboxId:115339d3f8a7a4ab89aa3035fd4d956c0feb6b5e6618995ebbf13ec5066915b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765235071041891386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f3895b3918862c80f063ceeb3f1f2c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7304ec223fc4d8159ccc5654439f8e91a4f51f772b800e8c1c720bd3371231f9,PodSandboxId:a82030babf4af1c8dc3428bda58a7c879c9aaaee33a83c3d548f84ca562bcf27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765235071014178728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a7020def229b49676f6e7d95bb226b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d201989df3a2f26f8d34d1fe4440766b6c1213c24c98e04487a4ae97447f46,PodSandboxId:f30a0e3d76538fca59124aa3ab73d7dd92ee8f1581a6247e75272e3c780f328f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765235071019074852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0f82799eab2b7dbc74a30f6e5fcec9,},Annotations:map[string]string{io.kubernetes.
container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42e4afd3533937d3c005d4a7690a61a9be9446a85b23e45df7fdb962387f85f8,PodSandboxId:8dad0731db34f3ca6c59a9012fe213c96cb2d3d1ff9557d1c0cbb0d9cde8f623,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765235070960043014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b578d1402769308033b53b4d4ebed430,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7abfb120-aa3e-4914-9a0f-305d6bcbf305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.995506077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89f6e862-5943-4037-affe-6a95a41729b7 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.995613194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89f6e862-5943-4037-affe-6a95a41729b7 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.996789524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e837b4ef-55d0-4a1b-a8e6-19c1bba698a3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:09:23 addons-192260 crio[815]: time="2025-12-08 23:09:23.997976378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765235363997943988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545758,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e837b4ef-55d0-4a1b-a8e6-19c1bba698a3 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	309d5791c2012       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                           2 minutes ago       Running             nginx                     0                   731efca21a14e       nginx                                       default
	7d1c10a78960d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   51b80e6a25193       busybox                                     default
	623a3531921e9       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   accc9c79814df       ingress-nginx-controller-85d4c799dd-pbfvz   ingress-nginx
	827d1d0d9c011       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     1                   cfd5b1f1bcd14       ingress-nginx-admission-patch-tl5kf         ingress-nginx
	5603a4318af13       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   f7145ca2072cd       ingress-nginx-admission-create-9szqh        ingress-nginx
	d13df55e6ba1b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   2fe87c9f5ccae       kube-ingress-dns-minikube                   kube-system
	14d36f527b2c9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   ed4c809f71b1b       amd-gpu-device-plugin-bn8cc                 kube-system
	42cbb52576ae8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   0fd9a777eff16       storage-provisioner                         kube-system
	5b1a7e8a872c5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   8de01794a04e5       coredns-66bc5c9577-tfdh9                    kube-system
	a127933081384       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   c072205f9a22f       kube-proxy-tfs65                            kube-system
	fbc70613979cc       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   115339d3f8a7a       kube-controller-manager-addons-192260       kube-system
	55d201989df3a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   f30a0e3d76538       etcd-addons-192260                          kube-system
	7304ec223fc4d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   a82030babf4af       kube-apiserver-addons-192260                kube-system
	42e4afd353393       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   8dad0731db34f       kube-scheduler-addons-192260                kube-system
	
	
	==> coredns [5b1a7e8a872c566692d266604bcd725a4deec0a33f5203bf0e76d640f200e8e0] <==
	[INFO] 10.244.0.8:35779 - 9641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000148949s
	[INFO] 10.244.0.8:35779 - 55892 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093858s
	[INFO] 10.244.0.8:35779 - 46916 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000054147s
	[INFO] 10.244.0.8:35779 - 2974 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000089193s
	[INFO] 10.244.0.8:35779 - 15550 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078355s
	[INFO] 10.244.0.8:35779 - 40033 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000059189s
	[INFO] 10.244.0.8:35779 - 29017 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000067918s
	[INFO] 10.244.0.8:42761 - 59138 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156469s
	[INFO] 10.244.0.8:42761 - 59436 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001032097s
	[INFO] 10.244.0.8:41184 - 3077 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136537s
	[INFO] 10.244.0.8:41184 - 3550 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114778s
	[INFO] 10.244.0.8:48433 - 6355 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119814s
	[INFO] 10.244.0.8:48433 - 6092 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00025625s
	[INFO] 10.244.0.8:39462 - 64876 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076855s
	[INFO] 10.244.0.8:39462 - 65081 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137047s
	[INFO] 10.244.0.23:40006 - 65075 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000246395s
	[INFO] 10.244.0.23:45892 - 26590 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000796802s
	[INFO] 10.244.0.23:52444 - 57723 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000380129s
	[INFO] 10.244.0.23:51338 - 10382 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000411426s
	[INFO] 10.244.0.23:40715 - 63509 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129254s
	[INFO] 10.244.0.23:54034 - 23794 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000247928s
	[INFO] 10.244.0.23:38259 - 6271 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004693767s
	[INFO] 10.244.0.23:53654 - 44481 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004601682s
	[INFO] 10.244.0.28:34858 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000268348s
	[INFO] 10.244.0.28:41543 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158058s
	
	
	==> describe nodes <==
	Name:               addons-192260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-192260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2846307350d09469fc6b6b47dd0c4837fa740d9c
	                    minikube.k8s.io/name=addons-192260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T23_04_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-192260
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 23:04:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-192260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 23:09:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 23:07:10 +0000   Mon, 08 Dec 2025 23:04:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 23:07:10 +0000   Mon, 08 Dec 2025 23:04:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 23:07:10 +0000   Mon, 08 Dec 2025 23:04:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 23:07:10 +0000   Mon, 08 Dec 2025 23:04:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    addons-192260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 4948afafde9e4c6c8df5d1bc42d49810
	  System UUID:                4948afaf-de9e-4c6c-8df5-d1bc42d49810
	  Boot ID:                    ebf44fa4-fe4c-45f1-a39d-404400ca7886
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     hello-world-app-5d498dc89-k5rks              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-pbfvz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m34s
	  kube-system                 amd-gpu-device-plugin-bn8cc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-66bc5c9577-tfdh9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m43s
	  kube-system                 etcd-addons-192260                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m48s
	  kube-system                 kube-apiserver-addons-192260                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-192260        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-tfs65                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-192260                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m40s  kube-proxy       
	  Normal  Starting                 4m48s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m48s  kubelet          Node addons-192260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s  kubelet          Node addons-192260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s  kubelet          Node addons-192260 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m47s  kubelet          Node addons-192260 status is now: NodeReady
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-192260 event: Registered Node addons-192260 in Controller
	  Normal  CIDRAssignmentFailed     4m44s  cidrAllocator    Node addons-192260 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +1.776924] kauditd_printk_skb: 356 callbacks suppressed
	[Dec 8 23:05] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.540714] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.000674] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.107011] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.005534] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.969402] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.076233] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.006680] kauditd_printk_skb: 121 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 126 callbacks suppressed
	[Dec 8 23:06] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.393167] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.250468] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.861994] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.688838] kauditd_printk_skb: 89 callbacks suppressed
	[  +2.508779] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.626842] kauditd_printk_skb: 112 callbacks suppressed
	[  +1.511650] kauditd_printk_skb: 185 callbacks suppressed
	[  +0.461135] kauditd_printk_skb: 82 callbacks suppressed
	[Dec 8 23:07] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.577904] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.602630] kauditd_printk_skb: 61 callbacks suppressed
	[Dec 8 23:09] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [55d201989df3a2f26f8d34d1fe4440766b6c1213c24c98e04487a4ae97447f46] <==
	{"level":"warn","ts":"2025-12-08T23:06:01.127423Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.302468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:01.127462Z","caller":"traceutil/trace.go:172","msg":"trace[1294694312] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:1133; }","duration":"194.351919ms","start":"2025-12-08T23:06:00.933100Z","end":"2025-12-08T23:06:01.127452Z","steps":["trace[1294694312] 'agreement among raft nodes before linearized reading'  (duration: 194.257481ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:01.129020Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.570061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:01.129053Z","caller":"traceutil/trace.go:172","msg":"trace[601339431] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1133; }","duration":"192.609492ms","start":"2025-12-08T23:06:00.936436Z","end":"2025-12-08T23:06:01.129045Z","steps":["trace[601339431] 'agreement among raft nodes before linearized reading'  (duration: 192.552686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:01.129252Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.091324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:01.129272Z","caller":"traceutil/trace.go:172","msg":"trace[1720038533] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1133; }","duration":"190.11604ms","start":"2025-12-08T23:06:00.939150Z","end":"2025-12-08T23:06:01.129266Z","steps":["trace[1720038533] 'agreement among raft nodes before linearized reading'  (duration: 190.053217ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:03.509008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.360187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2025-12-08T23:06:03.509764Z","caller":"traceutil/trace.go:172","msg":"trace[655540464] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1135; }","duration":"173.119657ms","start":"2025-12-08T23:06:03.336630Z","end":"2025-12-08T23:06:03.509750Z","steps":["trace[655540464] 'range keys from in-memory index tree'  (duration: 172.245501ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:03.509413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.914561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:03.510038Z","caller":"traceutil/trace.go:172","msg":"trace[430686116] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"114.56779ms","start":"2025-12-08T23:06:03.395459Z","end":"2025-12-08T23:06:03.510027Z","steps":["trace[430686116] 'range keys from in-memory index tree'  (duration: 113.832111ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:03.509643Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.976043ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:03.510184Z","caller":"traceutil/trace.go:172","msg":"trace[514472586] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1135; }","duration":"150.523812ms","start":"2025-12-08T23:06:03.359650Z","end":"2025-12-08T23:06:03.510174Z","steps":["trace[514472586] 'range keys from in-memory index tree'  (duration: 149.966113ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T23:06:34.976639Z","caller":"traceutil/trace.go:172","msg":"trace[1786501468] transaction","detail":"{read_only:false; response_revision:1328; number_of_response:1; }","duration":"143.1594ms","start":"2025-12-08T23:06:34.833462Z","end":"2025-12-08T23:06:34.976621Z","steps":["trace[1786501468] 'process raft request'  (duration: 142.971023ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T23:06:39.787465Z","caller":"traceutil/trace.go:172","msg":"trace[897648326] linearizableReadLoop","detail":"{readStateIndex:1389; appliedIndex:1389; }","duration":"276.542765ms","start":"2025-12-08T23:06:39.510904Z","end":"2025-12-08T23:06:39.787447Z","steps":["trace[897648326] 'read index received'  (duration: 276.53353ms)","trace[897648326] 'applied index is now lower than readState.Index'  (duration: 4.838µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T23:06:39.787682Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.761757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e\" limit:1 ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2025-12-08T23:06:39.787709Z","caller":"traceutil/trace.go:172","msg":"trace[710840876] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e; range_end:; response_count:1; response_revision:1346; }","duration":"276.802816ms","start":"2025-12-08T23:06:39.510900Z","end":"2025-12-08T23:06:39.787702Z","steps":["trace[710840876] 'agreement among raft nodes before linearized reading'  (duration: 276.632524ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:39.787927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.894799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e\" limit:1 ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2025-12-08T23:06:39.788007Z","caller":"traceutil/trace.go:172","msg":"trace[1850873860] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e; range_end:; response_count:1; response_revision:1347; }","duration":"241.98191ms","start":"2025-12-08T23:06:39.546015Z","end":"2025-12-08T23:06:39.787997Z","steps":["trace[1850873860] 'agreement among raft nodes before linearized reading'  (duration: 241.817789ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T23:06:39.788204Z","caller":"traceutil/trace.go:172","msg":"trace[1503838877] transaction","detail":"{read_only:false; response_revision:1347; number_of_response:1; }","duration":"329.282907ms","start":"2025-12-08T23:06:39.458910Z","end":"2025-12-08T23:06:39.788193Z","steps":["trace[1503838877] 'process raft request'  (duration: 328.83197ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:39.788654Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T23:06:39.458891Z","time spent":"329.334405ms","remote":"127.0.0.1:36102","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11100,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-192260\" mod_revision:1160 > success:<request_put:<key:\"/registry/minions/addons-192260\" value_size:11061 >> failure:<request_range:<key:\"/registry/minions/addons-192260\" > >"}
	{"level":"info","ts":"2025-12-08T23:06:48.719302Z","caller":"traceutil/trace.go:172","msg":"trace[525970162] transaction","detail":"{read_only:false; response_revision:1406; number_of_response:1; }","duration":"304.440811ms","start":"2025-12-08T23:06:48.414810Z","end":"2025-12-08T23:06:48.719251Z","steps":["trace[525970162] 'process raft request'  (duration: 304.28114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T23:06:48.719473Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T23:06:48.414792Z","time spent":"304.617639ms","remote":"127.0.0.1:36264","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1367 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2025-12-08T23:06:48.720221Z","caller":"traceutil/trace.go:172","msg":"trace[222860549] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1454; }","duration":"292.828325ms","start":"2025-12-08T23:06:48.427264Z","end":"2025-12-08T23:06:48.720093Z","steps":["trace[222860549] 'read index received'  (duration: 292.820382ms)","trace[222860549] 'applied index is now lower than readState.Index'  (duration: 6.822µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T23:06:48.720483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.158651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T23:06:48.720547Z","caller":"traceutil/trace.go:172","msg":"trace[95067913] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1406; }","duration":"293.296744ms","start":"2025-12-08T23:06:48.427242Z","end":"2025-12-08T23:06:48.720539Z","steps":["trace[95067913] 'agreement among raft nodes before linearized reading'  (duration: 293.132473ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:09:24 up 5 min,  0 users,  load average: 0.49, 0.97, 0.53
	Linux addons-192260 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7304ec223fc4d8159ccc5654439f8e91a4f51f772b800e8c1c720bd3371231f9] <==
	 > logger="UnhandledError"
	E1208 23:05:28.581831       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.80.140:443: connect: connection refused" logger="UnhandledError"
	E1208 23:05:28.583533       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.80.140:443: connect: connection refused" logger="UnhandledError"
	E1208 23:05:28.589293       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.80.140:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.80.140:443: connect: connection refused" logger="UnhandledError"
	I1208 23:05:28.705351       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 23:06:23.407667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:50994: use of closed network connection
	E1208 23:06:23.631226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.248:8443->192.168.39.1:36654: use of closed network connection
	I1208 23:06:33.123625       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.163.147"}
	I1208 23:06:55.900869       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1208 23:06:56.097587       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.253.177"}
	E1208 23:07:05.430778       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1208 23:07:12.277906       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1208 23:07:29.129215       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 23:07:29.131195       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 23:07:29.172762       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 23:07:29.172892       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 23:07:29.190500       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 23:07:29.191218       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 23:07:29.227874       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 23:07:29.228095       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 23:07:29.598207       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1208 23:07:30.173788       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1208 23:07:30.228259       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1208 23:07:30.356239       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1208 23:09:22.934809       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.247.221"}
	
	
	==> kube-controller-manager [fbc70613979ccfc27eb729fbfa7d7bf8e6b27fa7838d28e76f4b0c505e8cd883] <==
	E1208 23:07:38.987994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:07:40.529706       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:07:40.530873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1208 23:07:40.804150       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1208 23:07:40.804291       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 23:07:40.891926       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1208 23:07:40.891967       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1208 23:07:45.025443       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:07:45.026710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:07:49.196800       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:07:49.198295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:07:50.512365       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:07:50.513554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:01.692718       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:01.694235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:10.742817       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:10.743813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:15.266527       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:15.267833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:42.447944       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:42.449010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:51.597343       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:51.598459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 23:08:52.694678       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 23:08:52.695707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a127933081384352534f979af56716a573e5f0ad42ab91c8ab6bea0202bc588c] <==
	I1208 23:04:42.976747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 23:04:43.092847       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 23:04:43.094488       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.248"]
	E1208 23:04:43.095806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 23:04:43.268557       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 23:04:43.268644       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 23:04:43.268668       1 server_linux.go:132] "Using iptables Proxier"
	I1208 23:04:43.401506       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 23:04:43.404220       1 server.go:527] "Version info" version="v1.34.2"
	I1208 23:04:43.404346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 23:04:43.503367       1 config.go:309] "Starting node config controller"
	I1208 23:04:43.503384       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 23:04:43.503391       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 23:04:43.512788       1 config.go:200] "Starting service config controller"
	I1208 23:04:43.541270       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 23:04:43.518598       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 23:04:43.612850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 23:04:43.612861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 23:04:43.518577       1 config.go:106] "Starting endpoint slice config controller"
	I1208 23:04:43.612869       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 23:04:43.612873       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1208 23:04:43.704204       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [42e4afd3533937d3c005d4a7690a61a9be9446a85b23e45df7fdb962387f85f8] <==
	E1208 23:04:33.630401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 23:04:33.630489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 23:04:33.630772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 23:04:33.630921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 23:04:33.631153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 23:04:33.631194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 23:04:34.449819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1208 23:04:34.471519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 23:04:34.569901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 23:04:34.578520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 23:04:34.587399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 23:04:34.597840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 23:04:34.650109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 23:04:34.665576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 23:04:34.675051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 23:04:34.716749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 23:04:34.723598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 23:04:34.757236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1208 23:04:34.769508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 23:04:34.800166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 23:04:34.826443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 23:04:34.881421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 23:04:34.951391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 23:04:35.033565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1208 23:04:37.422300       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 23:07:40 addons-192260 kubelet[1519]: I1208 23:07:40.025754    1519 scope.go:117] "RemoveContainer" containerID="3abae149b9134d1328c1373097f0272c6a7a2bb734657d2a7a21a51d8654e5cb"
	Dec 08 23:07:46 addons-192260 kubelet[1519]: E1208 23:07:46.438062    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235266437368392 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:07:46 addons-192260 kubelet[1519]: E1208 23:07:46.438143    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235266437368392 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:07:56 addons-192260 kubelet[1519]: E1208 23:07:56.441168    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235276440717271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:07:56 addons-192260 kubelet[1519]: E1208 23:07:56.441198    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235276440717271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:07:58 addons-192260 kubelet[1519]: I1208 23:07:58.293768    1519 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bn8cc" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 23:08:06 addons-192260 kubelet[1519]: E1208 23:08:06.444236    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235286443722711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:06 addons-192260 kubelet[1519]: E1208 23:08:06.444269    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235286443722711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:16 addons-192260 kubelet[1519]: E1208 23:08:16.447563    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235296447160760 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:16 addons-192260 kubelet[1519]: E1208 23:08:16.447592    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235296447160760 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:26 addons-192260 kubelet[1519]: E1208 23:08:26.450109    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235306449569678 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:26 addons-192260 kubelet[1519]: E1208 23:08:26.450197    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235306449569678 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:36 addons-192260 kubelet[1519]: E1208 23:08:36.452990    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235316452397500 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:36 addons-192260 kubelet[1519]: E1208 23:08:36.453018    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235316452397500 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:45 addons-192260 kubelet[1519]: I1208 23:08:45.293649    1519 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 23:08:46 addons-192260 kubelet[1519]: E1208 23:08:46.455582    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235326455054223 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:46 addons-192260 kubelet[1519]: E1208 23:08:46.455606    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235326455054223 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:56 addons-192260 kubelet[1519]: E1208 23:08:56.458956    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235336458394666 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:08:56 addons-192260 kubelet[1519]: E1208 23:08:56.459002    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235336458394666 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:09:06 addons-192260 kubelet[1519]: E1208 23:09:06.461624    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235346461026591 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:09:06 addons-192260 kubelet[1519]: E1208 23:09:06.461652    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235346461026591 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:09:16 addons-192260 kubelet[1519]: E1208 23:09:16.464018    1519 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765235356463555391 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:09:16 addons-192260 kubelet[1519]: E1208 23:09:16.464065    1519 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765235356463555391 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545758} inodes_used:{value:187}}"
	Dec 08 23:09:22 addons-192260 kubelet[1519]: I1208 23:09:22.982663    1519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svv5m\" (UniqueName: \"kubernetes.io/projected/ef6c7466-b8ca-48d5-95ac-c3fbf8e42f11-kube-api-access-svv5m\") pod \"hello-world-app-5d498dc89-k5rks\" (UID: \"ef6c7466-b8ca-48d5-95ac-c3fbf8e42f11\") " pod="default/hello-world-app-5d498dc89-k5rks"
	Dec 08 23:09:24 addons-192260 kubelet[1519]: I1208 23:09:24.293725    1519 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bn8cc" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [42cbb52576ae8eaad5b2bf011952dc1c5d2fc6272d406a8e5429c2ceb8f2b97f] <==
	W1208 23:08:59.994821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:01.997860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:02.002752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:04.006320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:04.011753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:06.015166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:06.022604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:08.026213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:08.031204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:10.036038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:10.044344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:12.048040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:12.053012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:14.056797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:14.062671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:16.066466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:16.071750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:18.078108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:18.085392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:20.088750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:20.093370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:22.097104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:22.102911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:24.107098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 23:09:24.112714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-192260 -n addons-192260
helpers_test.go:269: (dbg) Run:  kubectl --context addons-192260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-k5rks ingress-nginx-admission-create-9szqh ingress-nginx-admission-patch-tl5kf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-192260 describe pod hello-world-app-5d498dc89-k5rks ingress-nginx-admission-create-9szqh ingress-nginx-admission-patch-tl5kf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-192260 describe pod hello-world-app-5d498dc89-k5rks ingress-nginx-admission-create-9szqh ingress-nginx-admission-patch-tl5kf: exit status 1 (76.364055ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-k5rks
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-192260/192.168.39.248
	Start Time:       Mon, 08 Dec 2025 23:09:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svv5m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-svv5m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-k5rks to addons-192260
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9szqh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tl5kf" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-192260 describe pod hello-world-app-5d498dc89-k5rks ingress-nginx-admission-create-9szqh ingress-nginx-admission-patch-tl5kf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable ingress --alsologtostderr -v=1: (7.737194118s)
--- FAIL: TestAddons/parallel/Ingress (158.10s)
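The post-mortem above finds hello-world-app still ContainerCreating and the two ingress-nginx admission pods already deleted; the harness locates such pods with a kubectl field selector (helpers_test.go:269). A minimal Go sketch of the same query, shelling out the way the harness does; the context name is taken from the log, everything else is stdlib:

	// list_pending.go: reproduce the harness's non-running-pod query.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same query as helpers_test.go:269: names of every pod whose
		// phase is not Running, across all namespaces.
		out, err := exec.Command("kubectl", "--context", "addons-192260",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		fmt.Printf("err=%v non-running pods: %s\n", err, out)
	}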

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (3.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image rm kicbase/echo-server:functional-136601 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 image rm kicbase/echo-server:functional-136601 --alsologtostderr: (3.565203883s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-136601" to be removed from minikube but still exists
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (3.84s)
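The assertion here (functional_test.go:418) is simply that the tag no longer appears in "image ls" after "image rm". A minimal sketch of that check in Go, using the binary path, profile, and tag from the log above:

	// image_rm_check.go: re-run the post-rm assertion from the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-136601",
			"image", "ls").CombinedOutput()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The test fails because the removed tag is still listed here.
		if strings.Contains(string(out), "kicbase/echo-server:functional-136601") {
			fmt.Println("kicbase/echo-server:functional-136601 still present after rm")
		}
	}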

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1208 23:17:59.903611  757852 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:17:59.903930  757852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:17:59.903941  757852 out.go:374] Setting ErrFile to fd 2...
	I1208 23:17:59.903945  757852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:17:59.904187  757852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:17:59.904776  757852 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 23:17:59.904876  757852 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 23:17:59.907376  757852 ssh_runner.go:195] Run: systemctl --version
	I1208 23:17:59.909906  757852 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
	I1208 23:17:59.910378  757852 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
	I1208 23:17:59.910407  757852 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
	I1208 23:17:59.910544  757852 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
	I1208 23:18:00.012742  757852 cache_images.go:291] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	I1208 23:18:00.012925  757852 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/echo-server-save.tar
	I1208 23:18:00.021417  757852 ssh_runner.go:362] scp /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --> /var/lib/minikube/images/echo-server-save.tar (4950016 bytes)
	I1208 23:18:00.256210  757852 crio.go:275] Loading image: /var/lib/minikube/images/echo-server-save.tar
	I1208 23:18:00.256311  757852 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar
	W1208 23:18:00.569589  757852 cache_images.go:255] Failed to load cached images for "functional-136601": loading images: CRI-O load /var/lib/minikube/images/echo-server-save.tar: crio load image: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar: Process exited with status 125
	stdout:
	
	stderr:
	Getting image source signatures
	Copying blob sha256:385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da
	Copying config sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
	Writing manifest to image destination
	Storing signatures
	Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
	I1208 23:18:00.569644  757852 cache_images.go:267] failed pushing to: functional-136601

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)
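The root cause is visible in the stderr above: "sudo podman load" rejects the tarball ("payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)"), so the saved archive is not in a layout podman recognizes. A docker-archive carries manifest.json at the tar root, while an oci-archive carries index.json and oci-layout. A stdlib-only Go sketch for inspecting a suspect tarball; the local path is illustrative:

	// tar_layout.go: list top-level entries of a saved image tarball to see
	// which archive layout (if any) it actually uses.
	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("echo-server-save.tar") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		tr := tar.NewReader(f)
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err) // a read error here means the file is not a tar at all
			}
			// manifest.json => docker-archive; index.json + oci-layout => oci-archive
			fmt.Println(hdr.Name)
		}
	}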

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (2.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-136601
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image save --daemon kicbase/echo-server:functional-136601 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 image save --daemon kicbase/echo-server:functional-136601 --alsologtostderr: (2.171666208s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-136601
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-136601: exit status 1 (19.439156ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-136601

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-136601

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (2.21s)
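"image save --daemon" reported success, yet the daemon has no image under the tag the test inspects. A small Go sketch that checks both the localhost/-prefixed tag the test expects and the bare tag, to distinguish "nothing was saved" from "saved under a different name"; docker must be the local daemon, and the second tag is only a diagnostic guess:

	// save_daemon_check.go: probe the docker daemon for the saved image.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, tag := range []string{
			"localhost/kicbase/echo-server:functional-136601", // what the test inspects
			"kicbase/echo-server:functional-136601",           // diagnostic: unprefixed variant
		} {
			// "docker image inspect" exits non-zero when the tag is absent.
			err := exec.Command("docker", "image", "inspect", tag).Run()
			fmt.Printf("%s present=%v\n", tag, err == nil)
		}
	}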

TestPreload (142.64s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-687309 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1208 23:55:53.452206  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:56:12.274792  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-687309 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m26.584706642s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-687309 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-687309 image pull gcr.io/k8s-minikube/busybox: (3.445312012s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-687309
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-687309: (7.045647827s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-687309 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-687309 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (42.874733645s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-687309 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

-- /stdout --
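The image list above contains only the stock preloaded images, so the busybox image pulled before the stop did not survive the preloaded restart; that is exactly the assertion at preload_test.go:73. A minimal sketch of that check, with the binary, profile, and expected image all taken from the log:

	// preload_check.go: assert the pre-stop pull survived the restart.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-687309",
			"image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("gcr.io/k8s-minikube/busybox missing after preload restart")
		}
	}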
panic.go:615: *** TestPreload FAILED at 2025-12-08 23:57:19.039355584 +0000 UTC m=+3247.421404220
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-687309 -n test-preload-687309
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-687309 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-823416 ssh -n multinode-823416-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ ssh     │ multinode-823416 ssh -n multinode-823416 sudo cat /home/docker/cp-test_multinode-823416-m03_multinode-823416.txt                                          │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ cp      │ multinode-823416 cp multinode-823416-m03:/home/docker/cp-test.txt multinode-823416-m02:/home/docker/cp-test_multinode-823416-m03_multinode-823416-m02.txt │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ ssh     │ multinode-823416 ssh -n multinode-823416-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ ssh     │ multinode-823416 ssh -n multinode-823416-m02 sudo cat /home/docker/cp-test_multinode-823416-m03_multinode-823416-m02.txt                                  │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ node    │ multinode-823416 node stop m03                                                                                                                            │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:44 UTC │
	│ node    │ multinode-823416 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:44 UTC │ 08 Dec 25 23:45 UTC │
	│ node    │ list -p multinode-823416                                                                                                                                  │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:45 UTC │                     │
	│ stop    │ -p multinode-823416                                                                                                                                       │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:45 UTC │ 08 Dec 25 23:47 UTC │
	│ start   │ -p multinode-823416 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:47 UTC │ 08 Dec 25 23:49 UTC │
	│ node    │ list -p multinode-823416                                                                                                                                  │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:49 UTC │                     │
	│ node    │ multinode-823416 node delete m03                                                                                                                          │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:49 UTC │ 08 Dec 25 23:50 UTC │
	│ stop    │ multinode-823416 stop                                                                                                                                     │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:50 UTC │ 08 Dec 25 23:52 UTC │
	│ start   │ -p multinode-823416 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:52 UTC │ 08 Dec 25 23:54 UTC │
	│ node    │ list -p multinode-823416                                                                                                                                  │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │                     │
	│ start   │ -p multinode-823416-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-823416-m02 │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │                     │
	│ start   │ -p multinode-823416-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-823416-m03 │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │ 08 Dec 25 23:54 UTC │
	│ node    │ add -p multinode-823416                                                                                                                                   │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │                     │
	│ delete  │ -p multinode-823416-m03                                                                                                                                   │ multinode-823416-m03 │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │ 08 Dec 25 23:54 UTC │
	│ delete  │ -p multinode-823416                                                                                                                                       │ multinode-823416     │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │ 08 Dec 25 23:54 UTC │
	│ start   │ -p test-preload-687309 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-687309  │ jenkins │ v1.37.0 │ 08 Dec 25 23:54 UTC │ 08 Dec 25 23:56 UTC │
	│ image   │ test-preload-687309 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-687309  │ jenkins │ v1.37.0 │ 08 Dec 25 23:56 UTC │ 08 Dec 25 23:56 UTC │
	│ stop    │ -p test-preload-687309                                                                                                                                    │ test-preload-687309  │ jenkins │ v1.37.0 │ 08 Dec 25 23:56 UTC │ 08 Dec 25 23:56 UTC │
	│ start   │ -p test-preload-687309 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-687309  │ jenkins │ v1.37.0 │ 08 Dec 25 23:56 UTC │ 08 Dec 25 23:57 UTC │
	│ image   │ test-preload-687309 image list                                                                                                                            │ test-preload-687309  │ jenkins │ v1.37.0 │ 08 Dec 25 23:57 UTC │ 08 Dec 25 23:57 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 23:56:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 23:56:36.016696  774530 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:56:36.016968  774530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:56:36.016978  774530 out.go:374] Setting ErrFile to fd 2...
	I1208 23:56:36.016983  774530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:56:36.017198  774530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:56:36.017641  774530 out.go:368] Setting JSON to false
	I1208 23:56:36.018594  774530 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9536,"bootTime":1765228660,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:56:36.018652  774530 start.go:143] virtualization: kvm guest
	I1208 23:56:36.021384  774530 out.go:179] * [test-preload-687309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:56:36.022668  774530 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:56:36.022706  774530 notify.go:221] Checking for updates...
	I1208 23:56:36.024914  774530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:56:36.025981  774530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:56:36.026987  774530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:56:36.028006  774530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:56:36.029018  774530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:56:36.030758  774530 config.go:182] Loaded profile config "test-preload-687309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:56:36.031497  774530 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:56:36.067697  774530 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 23:56:36.068871  774530 start.go:309] selected driver: kvm2
	I1208 23:56:36.068889  774530 start.go:927] validating driver "kvm2" against &{Name:test-preload-687309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-687309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:56:36.068984  774530 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:56:36.069872  774530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 23:56:36.069899  774530 cni.go:84] Creating CNI manager for ""
	I1208 23:56:36.069959  774530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:56:36.070003  774530 start.go:353] cluster config:
	{Name:test-preload-687309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-687309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:56:36.070101  774530 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 23:56:36.071492  774530 out.go:179] * Starting "test-preload-687309" primary control-plane node in "test-preload-687309" cluster
	I1208 23:56:36.072551  774530 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:56:36.072582  774530 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 23:56:36.072592  774530 cache.go:65] Caching tarball of preloaded images
	I1208 23:56:36.072680  774530 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 23:56:36.072697  774530 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 23:56:36.072775  774530 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/config.json ...
	I1208 23:56:36.072980  774530 start.go:360] acquireMachinesLock for test-preload-687309: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 23:56:36.073025  774530 start.go:364] duration metric: took 25.283µs to acquireMachinesLock for "test-preload-687309"
	I1208 23:56:36.073040  774530 start.go:96] Skipping create...Using existing machine configuration
	I1208 23:56:36.073045  774530 fix.go:54] fixHost starting: 
	I1208 23:56:36.074900  774530 fix.go:112] recreateIfNeeded on test-preload-687309: state=Stopped err=<nil>
	W1208 23:56:36.074934  774530 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 23:56:36.076382  774530 out.go:252] * Restarting existing kvm2 VM for "test-preload-687309" ...
	I1208 23:56:36.076412  774530 main.go:143] libmachine: starting domain...
	I1208 23:56:36.076421  774530 main.go:143] libmachine: ensuring networks are active...
	I1208 23:56:36.077221  774530 main.go:143] libmachine: Ensuring network default is active
	I1208 23:56:36.077613  774530 main.go:143] libmachine: Ensuring network mk-test-preload-687309 is active
	I1208 23:56:36.078075  774530 main.go:143] libmachine: getting domain XML...
	I1208 23:56:36.079188  774530 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-687309</name>
	  <uuid>900d84eb-e057-4d7d-9312-719289cd12a5</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/test-preload-687309.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:dd:e3:00'/>
	      <source network='mk-test-preload-687309'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8a:a9:52'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
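
Note: the XML above is the libvirt domain definition that libmachine feeds to the qemu:///system connection. A minimal sketch of inspecting and starting such a domain by hand with virsh; the file name domain.xml is illustrative, and net-start fails harmlessly if a network is already active:

    # Hypothetical manual equivalent of the libmachine steps above.
    virsh -c qemu:///system define domain.xml                   # register the definition
    virsh -c qemu:///system net-start default                   # ensure networks are active
    virsh -c qemu:///system net-start mk-test-preload-687309
    virsh -c qemu:///system start test-preload-687309           # boot the VM
    virsh -c qemu:///system domifaddr test-preload-687309       # check for a DHCP lease
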
	
	I1208 23:56:37.336939  774530 main.go:143] libmachine: waiting for domain to start...
	I1208 23:56:37.338346  774530 main.go:143] libmachine: domain is now running
	I1208 23:56:37.338377  774530 main.go:143] libmachine: waiting for IP...
	I1208 23:56:37.339132  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:37.339848  774530 main.go:143] libmachine: domain test-preload-687309 has current primary IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:37.339865  774530 main.go:143] libmachine: found domain IP: 192.168.39.114
	I1208 23:56:37.339873  774530 main.go:143] libmachine: reserving static IP address...
	I1208 23:56:37.340281  774530 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-687309", mac: "52:54:00:dd:e3:00", ip: "192.168.39.114"} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:55:13 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:37.340304  774530 main.go:143] libmachine: skip adding static IP to network mk-test-preload-687309 - found existing host DHCP lease matching {name: "test-preload-687309", mac: "52:54:00:dd:e3:00", ip: "192.168.39.114"}
	I1208 23:56:37.340313  774530 main.go:143] libmachine: reserved static IP address 192.168.39.114 for domain test-preload-687309
	I1208 23:56:37.340320  774530 main.go:143] libmachine: waiting for SSH...
	I1208 23:56:37.340325  774530 main.go:143] libmachine: Getting to WaitForSSH function...
	I1208 23:56:37.342591  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:37.342922  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:55:13 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:37.342942  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:37.343092  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:37.343308  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:37.343319  774530 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1208 23:56:40.440652  774530 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.114:22: connect: no route to host
	I1208 23:56:46.520690  774530 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.114:22: connect: no route to host
	I1208 23:56:49.630859  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: 
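
Note: the two "no route to host" errors above are the expected retry loop while the guest finishes booting; the empty "SSH cmd err" line marks the first successful `exit 0`. A minimal shell sketch of such a wait loop (the IP and key path are taken from this run, but the loop itself is illustrative, not minikube's actual Go code path):

    # Retry until sshd in the guest accepts a trivial command.
    until ssh -i /home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa \
          -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
          docker@192.168.39.114 'exit 0' 2>/dev/null; do
      sleep 3   # "connect: no route to host" is normal until the guest NIC is up
    done
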
	I1208 23:56:49.634504  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.634934  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.634976  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.635220  774530 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/config.json ...
	I1208 23:56:49.635481  774530 machine.go:94] provisionDockerMachine start ...
	I1208 23:56:49.637632  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.637946  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.637991  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.638201  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:49.638437  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:49.638450  774530 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 23:56:49.745764  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1208 23:56:49.745793  774530 buildroot.go:166] provisioning hostname "test-preload-687309"
	I1208 23:56:49.748868  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.749325  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.749349  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.749545  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:49.749791  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:49.749805  774530 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-687309 && echo "test-preload-687309" | sudo tee /etc/hostname
	I1208 23:56:49.873149  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-687309
	
	I1208 23:56:49.875960  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.876460  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.876496  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.876734  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:49.876979  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:49.876998  774530 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-687309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-687309/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-687309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 23:56:49.992892  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 23:56:49.992921  774530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1208 23:56:49.992947  774530 buildroot.go:174] setting up certificates
	I1208 23:56:49.992958  774530 provision.go:84] configureAuth start
	I1208 23:56:49.995981  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.996401  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.996438  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.998671  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.998997  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:49.999024  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:49.999148  774530 provision.go:143] copyHostCerts
	I1208 23:56:49.999196  774530 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1208 23:56:49.999206  774530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1208 23:56:49.999278  774530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1208 23:56:49.999396  774530 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1208 23:56:49.999405  774530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1208 23:56:49.999445  774530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1208 23:56:49.999505  774530 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1208 23:56:49.999514  774530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1208 23:56:49.999538  774530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1208 23:56:49.999586  774530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.test-preload-687309 san=[127.0.0.1 192.168.39.114 localhost minikube test-preload-687309]
	I1208 23:56:50.097348  774530 provision.go:177] copyRemoteCerts
	I1208 23:56:50.097415  774530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 23:56:50.099863  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.100316  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.100352  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.100500  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:56:50.189466  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 23:56:50.221283  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 23:56:50.252920  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 23:56:50.279664  774530 provision.go:87] duration metric: took 286.688603ms to configureAuth
	I1208 23:56:50.279706  774530 buildroot.go:189] setting minikube options for container-runtime
	I1208 23:56:50.279937  774530 config.go:182] Loaded profile config "test-preload-687309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:56:50.282761  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.283226  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.283256  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.283487  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:50.283771  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:50.283796  774530 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 23:56:50.518959  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 23:56:50.519003  774530 machine.go:97] duration metric: took 883.499684ms to provisionDockerMachine
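
Note: the provisioning step above wrote a CRI-O drop-in over SSH and restarted the service. A hedged verification sketch (not part of the test) to confirm the drop-in landed and CRI-O came back up:

    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expect: active
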
	I1208 23:56:50.519023  774530 start.go:293] postStartSetup for "test-preload-687309" (driver="kvm2")
	I1208 23:56:50.519038  774530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 23:56:50.519125  774530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 23:56:50.522223  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.522638  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.522661  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.522778  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:56:50.606746  774530 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 23:56:50.611305  774530 info.go:137] Remote host: Buildroot 2025.02
	I1208 23:56:50.611333  774530 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1208 23:56:50.611429  774530 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1208 23:56:50.611524  774530 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1208 23:56:50.611619  774530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 23:56:50.622546  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1208 23:56:50.649525  774530 start.go:296] duration metric: took 130.482256ms for postStartSetup
	I1208 23:56:50.649580  774530 fix.go:56] duration metric: took 14.576531926s for fixHost
	I1208 23:56:50.651990  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.652427  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.652463  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.652618  774530 main.go:143] libmachine: Using SSH client type: native
	I1208 23:56:50.652819  774530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1208 23:56:50.652828  774530 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1208 23:56:50.764613  774530 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238210.717803224
	
	I1208 23:56:50.764638  774530 fix.go:216] guest clock: 1765238210.717803224
	I1208 23:56:50.764648  774530 fix.go:229] Guest: 2025-12-08 23:56:50.717803224 +0000 UTC Remote: 2025-12-08 23:56:50.649584626 +0000 UTC m=+14.683444500 (delta=68.218598ms)
	I1208 23:56:50.764672  774530 fix.go:200] guest clock delta is within tolerance: 68.218598ms
	I1208 23:56:50.764679  774530 start.go:83] releasing machines lock for "test-preload-687309", held for 14.691644277s
	I1208 23:56:50.767757  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.768165  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.768203  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.768796  774530 ssh_runner.go:195] Run: cat /version.json
	I1208 23:56:50.768900  774530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 23:56:50.772048  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.772199  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.772502  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.772564  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:50.772597  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.772627  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:50.772786  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:56:50.772911  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:56:50.877015  774530 ssh_runner.go:195] Run: systemctl --version
	I1208 23:56:50.882942  774530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 23:56:51.033524  774530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 23:56:51.040834  774530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 23:56:51.040901  774530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 23:56:51.059689  774530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 23:56:51.059713  774530 start.go:496] detecting cgroup driver to use...
	I1208 23:56:51.059780  774530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 23:56:51.078016  774530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 23:56:51.099819  774530 docker.go:218] disabling cri-docker service (if available) ...
	I1208 23:56:51.099875  774530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 23:56:51.120382  774530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 23:56:51.139493  774530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 23:56:51.293951  774530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 23:56:51.507401  774530 docker.go:234] disabling docker service ...
	I1208 23:56:51.507475  774530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 23:56:51.523308  774530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 23:56:51.537696  774530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 23:56:51.690886  774530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 23:56:51.830708  774530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 23:56:51.846163  774530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 23:56:51.867632  774530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 23:56:51.867700  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.879339  774530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 23:56:51.879457  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.891428  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.903416  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.915678  774530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 23:56:51.928188  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.939569  774530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 23:56:51.957998  774530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
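
Note: the sed runs above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A sketch to spot-check the resulting drop-in (expected values shown as comments; exact file layout may differ):

    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    sudo grep -A1 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
    #   "net.ipv4.ip_unprivileged_port_start=0",
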
	I1208 23:56:51.969286  774530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 23:56:51.979173  774530 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1208 23:56:51.979230  774530 ssh_runner.go:195] Run: sudo modprobe br_netfilter
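
Note: the sysctl probe fails with "No such file or directory" whenever the br_netfilter module is not yet loaded, which is exactly why the modprobe follows it. A minimal reproduction of the sequence:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # fails until the module exists
    sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables   # now prints the current value
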
	I1208 23:56:51.998631  774530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 23:56:52.009697  774530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:56:52.141912  774530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 23:56:52.244051  774530 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 23:56:52.244138  774530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 23:56:52.249306  774530 start.go:564] Will wait 60s for crictl version
	I1208 23:56:52.249384  774530 ssh_runner.go:195] Run: which crictl
	I1208 23:56:52.253070  774530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 23:56:52.283397  774530 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1208 23:56:52.283494  774530 ssh_runner.go:195] Run: crio --version
	I1208 23:56:52.311923  774530 ssh_runner.go:195] Run: crio --version
	I1208 23:56:52.341640  774530 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1208 23:56:52.345929  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:52.346298  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:56:52.346324  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:56:52.346502  774530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1208 23:56:52.350894  774530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 23:56:52.365173  774530 kubeadm.go:884] updating cluster {Name:test-preload-687309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-687309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 23:56:52.365303  774530 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:56:52.365393  774530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 23:56:52.396783  774530 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1208 23:56:52.396876  774530 ssh_runner.go:195] Run: which lz4
	I1208 23:56:52.401343  774530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 23:56:52.406229  774530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 23:56:52.406273  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1208 23:56:53.590007  774530 crio.go:462] duration metric: took 1.188728146s to copy over tarball
	I1208 23:56:53.590095  774530 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 23:56:55.094129  774530 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.504005874s)
	I1208 23:56:55.094187  774530 crio.go:469] duration metric: took 1.504138176s to extract the tarball
	I1208 23:56:55.094196  774530 ssh_runner.go:146] rm: /preloaded.tar.lz4
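
Note: the three steps above (scp, tar, rm) stage the preloaded image tarball into CRI-O's storage under /var. A manual equivalent over SSH, reusing the exact tar invocation the log shows (assumes the remote user can write to / via the scp target; a sketch, not minikube's code path):

    scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.114:/preloaded.tar.lz4
    ssh docker@192.168.39.114 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
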
	I1208 23:56:55.138866  774530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 23:56:55.181831  774530 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 23:56:55.181858  774530 cache_images.go:86] Images are preloaded, skipping loading
	I1208 23:56:55.181866  774530 kubeadm.go:935] updating node { 192.168.39.114 8443 v1.34.2 crio true true} ...
	I1208 23:56:55.181968  774530 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-687309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-687309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
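
Note: the empty `ExecStart=` line in the kubelet drop-in above is the standard systemd idiom for clearing an inherited ExecStart before defining a new one, so the unit ends up with exactly one command line. A minimal sketch of writing such a drop-in (flags trimmed for brevity relative to the log):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload   # required before the drop-in takes effect
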
	I1208 23:56:55.182043  774530 ssh_runner.go:195] Run: crio config
	I1208 23:56:55.229917  774530 cni.go:84] Creating CNI manager for ""
	I1208 23:56:55.229945  774530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:56:55.229963  774530 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 23:56:55.229993  774530 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-687309 NodeName:test-preload-687309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 23:56:55.230176  774530 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-687309"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
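
Note: the generated file above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration as one multi-document YAML. Recent kubeadm releases can sanity-check such a file without applying it; a hedged sketch using the binary and config paths from this run:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
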
	
	I1208 23:56:55.230272  774530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 23:56:55.243002  774530 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 23:56:55.243077  774530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 23:56:55.254252  774530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1208 23:56:55.274221  774530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 23:56:55.294559  774530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1208 23:56:55.314899  774530 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I1208 23:56:55.319055  774530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 23:56:55.334122  774530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:56:55.473994  774530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 23:56:55.514934  774530 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309 for IP: 192.168.39.114
	I1208 23:56:55.514967  774530 certs.go:195] generating shared ca certs ...
	I1208 23:56:55.514992  774530 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:56:55.515212  774530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1208 23:56:55.515296  774530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1208 23:56:55.515315  774530 certs.go:257] generating profile certs ...
	I1208 23:56:55.515473  774530 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.key
	I1208 23:56:55.515579  774530 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/apiserver.key.d39b4146
	I1208 23:56:55.515655  774530 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/proxy-client.key
	I1208 23:56:55.515810  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1208 23:56:55.515866  774530 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1208 23:56:55.515880  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 23:56:55.515922  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1208 23:56:55.515965  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1208 23:56:55.515999  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1208 23:56:55.516067  774530 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1208 23:56:55.517631  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 23:56:55.556764  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 23:56:55.594179  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 23:56:55.622486  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1208 23:56:55.649778  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1208 23:56:55.677103  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 23:56:55.703651  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 23:56:55.730884  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 23:56:55.758828  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 23:56:55.786142  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1208 23:56:55.812704  774530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1208 23:56:55.838967  774530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 23:56:55.857805  774530 ssh_runner.go:195] Run: openssl version
	I1208 23:56:55.863731  774530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1208 23:56:55.874199  774530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1208 23:56:55.884798  774530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1208 23:56:55.889529  774530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1208 23:56:55.889588  774530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1208 23:56:55.896204  774530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 23:56:55.907237  774530 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/748930.pem /etc/ssl/certs/51391683.0
	I1208 23:56:55.918280  774530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1208 23:56:55.929332  774530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1208 23:56:55.940616  774530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1208 23:56:55.945299  774530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1208 23:56:55.945345  774530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1208 23:56:55.951811  774530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 23:56:55.962107  774530 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7489302.pem /etc/ssl/certs/3ec20f2e.0
	I1208 23:56:55.972647  774530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:56:55.983083  774530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 23:56:55.994164  774530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:56:55.998849  774530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:56:55.998907  774530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 23:56:56.005455  774530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 23:56:56.015892  774530 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
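
Note: each `openssl x509 -hash` / `ln -fs` pair above implements OpenSSL's hashed CA directory layout, in which TLS clients resolve issuers in /etc/ssl/certs by a <subject_hash>.0 file name. The same idiom in two lines:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run
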
	I1208 23:56:56.026750  774530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 23:56:56.031481  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 23:56:56.038175  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 23:56:56.044798  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 23:56:56.051843  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 23:56:56.058609  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 23:56:56.065253  774530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
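
Note: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 24 hours, which is how the runs above decide whether any control-plane cert needs regeneration. For example:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert expires within 24h; regenerate"   # none of the checks above hit this path
    fi
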
	I1208 23:56:56.072012  774530 kubeadm.go:401] StartCluster: {Name:test-preload-687309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-687309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:56:56.072089  774530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 23:56:56.072151  774530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 23:56:56.103207  774530 cri.go:89] found id: ""
	I1208 23:56:56.103295  774530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 23:56:56.115291  774530 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 23:56:56.115314  774530 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 23:56:56.115375  774530 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 23:56:56.126668  774530 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 23:56:56.127108  774530 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-687309" does not appear in /home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:56:56.127204  774530 kubeconfig.go:62] /home/jenkins/minikube-integration/22075-744871/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-687309" cluster setting kubeconfig missing "test-preload-687309" context setting]
	I1208 23:56:56.127550  774530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/kubeconfig: {Name:mk0db57d03f858808a26818547681e8d59b0a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:56:56.128106  774530 kapi.go:59] client config for test-preload-687309: &rest.Config{Host:"https://192.168.39.114:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.crt", KeyFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.key", CAFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 23:56:56.128608  774530 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 23:56:56.128628  774530 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 23:56:56.128632  774530 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 23:56:56.128636  774530 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 23:56:56.128640  774530 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 23:56:56.129010  774530 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 23:56:56.139609  774530 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.114
	I1208 23:56:56.139641  774530 kubeadm.go:1161] stopping kube-system containers ...
	I1208 23:56:56.139657  774530 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 23:56:56.139709  774530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 23:56:56.171090  774530 cri.go:89] found id: ""
	I1208 23:56:56.171177  774530 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 23:56:56.189594  774530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 23:56:56.200491  774530 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 23:56:56.200507  774530 kubeadm.go:158] found existing configuration files:
	
	I1208 23:56:56.200547  774530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 23:56:56.210532  774530 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 23:56:56.210581  774530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 23:56:56.220906  774530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 23:56:56.230471  774530 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 23:56:56.230520  774530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 23:56:56.241167  774530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 23:56:56.251079  774530 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 23:56:56.251141  774530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 23:56:56.262346  774530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 23:56:56.272079  774530 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 23:56:56.272126  774530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
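
The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that fails the grep (including files that do not exist, as here, where grep exits with status 2) is removed so kubeadm can regenerate it. A compact sketch of that loop, assuming local execution in place of minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// run is a stand-in for minikube's ssh_runner; here it just execs locally.
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint is missing OR the file
			// does not exist; either way the config is unusable, so drop it.
			if err := run("sudo", "grep", endpoint, path); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				_ = run("sudo", "rm", "-f", path)
			}
		}
	}
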
	I1208 23:56:56.282855  774530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 23:56:56.293624  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 23:56:56.342897  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 23:56:57.317806  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 23:56:57.573349  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 23:56:57.642624  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
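
Rather than a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) in order against the freshly copied /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the version-pinned binaries. A sketch of that sequence, again assuming local execution:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		env := `env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH"`
		for _, phase := range []string{
			"certs all", "kubeconfig all", "kubelet-start",
			"control-plane all", "etcd local",
		} {
			cmd := fmt.Sprintf("%s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
				env, phase)
			// Phases run in order; a failure aborts the restart path.
			if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
				fmt.Println("phase failed:", phase, err)
				return
			}
		}
	}
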
	I1208 23:56:57.731447  774530 api_server.go:52] waiting for apiserver process to appear ...
	I1208 23:56:57.731535  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:56:58.232588  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:56:58.732544  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:56:59.231660  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:56:59.731784  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:56:59.778521  774530 api_server.go:72] duration metric: took 2.047083215s to wait for apiserver process to appear ...
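
The roughly 500 ms cadence of the pgrep runs above is a poll loop: it retries "pgrep -xnf kube-apiserver.*minikube.*" until the process exists or a timeout elapses. A minimal sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until the kube-apiserver process
	// appears or the deadline passes, mirroring the loop in the log above.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run() == nil {
				return nil // pgrep exits 0 once a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(2 * time.Minute))
	}
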
	I1208 23:56:59.778558  774530 api_server.go:88] waiting for apiserver healthz status ...
	I1208 23:56:59.778578  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:56:59.779146  774530 api_server.go:269] stopped: https://192.168.39.114:8443/healthz: Get "https://192.168.39.114:8443/healthz": dial tcp 192.168.39.114:8443: connect: connection refused
	I1208 23:57:00.278849  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:02.211928  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1208 23:57:02.211964  774530 api_server.go:103] status: https://192.168.39.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1208 23:57:02.211983  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:02.244592  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1208 23:57:02.244618  774530 api_server.go:103] status: https://192.168.39.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1208 23:57:02.278820  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:02.291965  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 23:57:02.291996  774530 api_server.go:103] status: https://192.168.39.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 23:57:02.778656  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:02.789569  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 23:57:02.789597  774530 api_server.go:103] status: https://192.168.39.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 23:57:03.279312  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:03.287798  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 23:57:03.287823  774530 api_server.go:103] status: https://192.168.39.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 23:57:03.779583  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:03.784841  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1208 23:57:03.795427  774530 api_server.go:141] control plane version: v1.34.2
	I1208 23:57:03.795470  774530 api_server.go:131] duration metric: took 4.016904063s to wait for apiserver health ...
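
The healthz progression above is the normal restart shape: connection refused while the listener comes up, then 403 because the unauthenticated probe is treated as system:anonymous until the RBAC bootstrap roles that allow anonymous /healthz access are applied, then 500 while the remaining poststarthooks finish, and finally 200. A poller therefore has to treat 403 and 500 as "not ready yet" rather than fatal. A sketch of such a poller (TLS verification disabled because the probe carries no client certificate; minikube's real check differs in detail):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Anonymous probe: skip server cert verification, send no client cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 (bootstrap roles pending) and 500 (poststarthooks pending)
				// both mean "keep waiting", exactly as in the log above.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never returned 200 within %v", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.39.114:8443/healthz", time.Minute))
	}
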
	I1208 23:57:03.795485  774530 cni.go:84] Creating CNI manager for ""
	I1208 23:57:03.795495  774530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:57:03.797441  774530 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1208 23:57:03.798950  774530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1208 23:57:03.818161  774530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
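
With the kvm2 driver and the crio runtime, minikube selects the built-in bridge CNI and writes a conflist into /etc/cni/net.d (the 496-byte 1-k8s.conflist above). The sketch below writes a representative bridge-plus-portmap conflist; the JSON is illustrative of the general shape only, not minikube's exact payload:

	package main

	import "os"

	// A representative CNI conflist for a bridge network with port mapping;
	// field values here are illustrative, not minikube's actual 496-byte file.
	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Requires root, as in the sudo mkdir/scp steps in the log above.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
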
	I1208 23:57:03.855157  774530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 23:57:03.860167  774530 system_pods.go:59] 7 kube-system pods found
	I1208 23:57:03.860214  774530 system_pods.go:61] "coredns-66bc5c9577-8ccz5" [2ec1da29-2313-4d2d-ba93-3a9b04a9edf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 23:57:03.860235  774530 system_pods.go:61] "etcd-test-preload-687309" [9f0fccf7-cbaa-40ca-91f2-dfc81c5c4015] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 23:57:03.860248  774530 system_pods.go:61] "kube-apiserver-test-preload-687309" [8123fd82-352d-4b45-aa4e-5cf28d31d16a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 23:57:03.860254  774530 system_pods.go:61] "kube-controller-manager-test-preload-687309" [5ef88f13-095f-4995-804a-48edf6b5f698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 23:57:03.860259  774530 system_pods.go:61] "kube-proxy-cdgq4" [432af2a8-2f4e-4cf2-ba0c-2c728c036c9b] Running
	I1208 23:57:03.860264  774530 system_pods.go:61] "kube-scheduler-test-preload-687309" [c23941e3-0fd0-4928-81ac-83c039e944f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 23:57:03.860269  774530 system_pods.go:61] "storage-provisioner" [f2ce4fed-cb49-42c2-b38a-bee7629bb151] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 23:57:03.860278  774530 system_pods.go:74] duration metric: took 5.087011ms to wait for pod list to return data ...
	I1208 23:57:03.860286  774530 node_conditions.go:102] verifying NodePressure condition ...
	I1208 23:57:03.866910  774530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 23:57:03.866950  774530 node_conditions.go:123] node cpu capacity is 2
	I1208 23:57:03.866970  774530 node_conditions.go:105] duration metric: took 6.678169ms to run NodePressure ...
	I1208 23:57:03.867038  774530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 23:57:04.126108  774530 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1208 23:57:04.129746  774530 kubeadm.go:744] kubelet initialised
	I1208 23:57:04.129769  774530 kubeadm.go:745] duration metric: took 3.636972ms waiting for restarted kubelet to initialise ...
	I1208 23:57:04.129785  774530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 23:57:04.144963  774530 ops.go:34] apiserver oom_adj: -16
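
The oom_adj check reads /proc/<pid>/oom_adj for the apiserver. On the legacy -17..15 scale, -16 sits at the strongly protected end, meaning the kernel OOM killer should sacrifice nearly anything else first; the value confirms the apiserver kept its protected score across the restart. The same read, sketched locally:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the apiserver PID the same way the log's shell one-liner does.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(strings.Split(string(out), "\n")[0])

		// oom_adj is the legacy interface (-17..15); -16 means the OOM killer
		// should kill nearly anything else before the apiserver.
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
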
	I1208 23:57:04.144984  774530 kubeadm.go:602] duration metric: took 8.029662323s to restartPrimaryControlPlane
	I1208 23:57:04.144996  774530 kubeadm.go:403] duration metric: took 8.072992135s to StartCluster
	I1208 23:57:04.145020  774530 settings.go:142] acquiring lock: {Name:mk01a7d116accfccda14c363bded9d7c0216d454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:57:04.145103  774530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:57:04.145826  774530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/kubeconfig: {Name:mk0db57d03f858808a26818547681e8d59b0a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:57:04.146083  774530 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 23:57:04.146186  774530 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 23:57:04.146277  774530 config.go:182] Loaded profile config "test-preload-687309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:57:04.146301  774530 addons.go:70] Setting storage-provisioner=true in profile "test-preload-687309"
	I1208 23:57:04.146329  774530 addons.go:239] Setting addon storage-provisioner=true in "test-preload-687309"
	W1208 23:57:04.146342  774530 addons.go:248] addon storage-provisioner should already be in state true
	I1208 23:57:04.146352  774530 addons.go:70] Setting default-storageclass=true in profile "test-preload-687309"
	I1208 23:57:04.146386  774530 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-687309"
	I1208 23:57:04.146391  774530 host.go:66] Checking if "test-preload-687309" exists ...
	I1208 23:57:04.147680  774530 out.go:179] * Verifying Kubernetes components...
	I1208 23:57:04.148795  774530 kapi.go:59] client config for test-preload-687309: &rest.Config{Host:"https://192.168.39.114:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.crt", KeyFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.key", CAFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 23:57:04.148987  774530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 23:57:04.149036  774530 addons.go:239] Setting addon default-storageclass=true in "test-preload-687309"
	W1208 23:57:04.149048  774530 addons.go:248] addon default-storageclass should already be in state true
	I1208 23:57:04.149065  774530 host.go:66] Checking if "test-preload-687309" exists ...
	I1208 23:57:04.148991  774530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 23:57:04.150198  774530 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 23:57:04.150234  774530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 23:57:04.150651  774530 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 23:57:04.150666  774530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 23:57:04.153174  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:57:04.153478  774530 main.go:143] libmachine: domain test-preload-687309 has defined MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:57:04.153556  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:57:04.153584  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:57:04.153725  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:57:04.153859  774530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:e3:00", ip: ""} in network mk-test-preload-687309: {Iface:virbr1 ExpiryTime:2025-12-09 00:56:46 +0000 UTC Type:0 Mac:52:54:00:dd:e3:00 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:test-preload-687309 Clientid:01:52:54:00:dd:e3:00}
	I1208 23:57:04.153888  774530 main.go:143] libmachine: domain test-preload-687309 has defined IP address 192.168.39.114 and MAC address 52:54:00:dd:e3:00 in network mk-test-preload-687309
	I1208 23:57:04.154015  774530 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/test-preload-687309/id_rsa Username:docker}
	I1208 23:57:04.369764  774530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 23:57:04.391849  774530 node_ready.go:35] waiting up to 6m0s for node "test-preload-687309" to be "Ready" ...
	I1208 23:57:04.498237  774530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 23:57:04.504979  774530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 23:57:05.147434  774530 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1208 23:57:05.148539  774530 addons.go:530] duration metric: took 1.002354686s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1208 23:57:06.395539  774530 node_ready.go:57] node "test-preload-687309" has "Ready":"False" status (will retry)
	W1208 23:57:08.895406  774530 node_ready.go:57] node "test-preload-687309" has "Ready":"False" status (will retry)
	W1208 23:57:10.896516  774530 node_ready.go:57] node "test-preload-687309" has "Ready":"False" status (will retry)
	I1208 23:57:12.895457  774530 node_ready.go:49] node "test-preload-687309" is "Ready"
	I1208 23:57:12.895492  774530 node_ready.go:38] duration metric: took 8.50359414s for node "test-preload-687309" to be "Ready" ...
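
The node wait above polls the Node object's Ready condition, logging "will retry" while it is False, until it flips to True (about 8.5 s here). A client-go sketch of the same loop, assuming the kubeconfig path from this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/22075-744871/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			if ok, _ := nodeReady(cs, "test-preload-687309"); ok {
				fmt.Println("node is Ready")
				return
			}
			fmt.Println("node not Ready yet (will retry)")
			time.Sleep(2 * time.Second)
		}
	}
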
	I1208 23:57:12.895510  774530 api_server.go:52] waiting for apiserver process to appear ...
	I1208 23:57:12.895571  774530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:57:12.914604  774530 api_server.go:72] duration metric: took 8.768474582s to wait for apiserver process to appear ...
	I1208 23:57:12.914639  774530 api_server.go:88] waiting for apiserver healthz status ...
	I1208 23:57:12.914662  774530 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1208 23:57:12.919843  774530 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1208 23:57:12.920698  774530 api_server.go:141] control plane version: v1.34.2
	I1208 23:57:12.920724  774530 api_server.go:131] duration metric: took 6.075514ms to wait for apiserver health ...
	I1208 23:57:12.920734  774530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 23:57:12.924731  774530 system_pods.go:59] 7 kube-system pods found
	I1208 23:57:12.924755  774530 system_pods.go:61] "coredns-66bc5c9577-8ccz5" [2ec1da29-2313-4d2d-ba93-3a9b04a9edf8] Running
	I1208 23:57:12.924764  774530 system_pods.go:61] "etcd-test-preload-687309" [9f0fccf7-cbaa-40ca-91f2-dfc81c5c4015] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 23:57:12.924768  774530 system_pods.go:61] "kube-apiserver-test-preload-687309" [8123fd82-352d-4b45-aa4e-5cf28d31d16a] Running
	I1208 23:57:12.924775  774530 system_pods.go:61] "kube-controller-manager-test-preload-687309" [5ef88f13-095f-4995-804a-48edf6b5f698] Running
	I1208 23:57:12.924778  774530 system_pods.go:61] "kube-proxy-cdgq4" [432af2a8-2f4e-4cf2-ba0c-2c728c036c9b] Running
	I1208 23:57:12.924782  774530 system_pods.go:61] "kube-scheduler-test-preload-687309" [c23941e3-0fd0-4928-81ac-83c039e944f1] Running
	I1208 23:57:12.924785  774530 system_pods.go:61] "storage-provisioner" [f2ce4fed-cb49-42c2-b38a-bee7629bb151] Running
	I1208 23:57:12.924789  774530 system_pods.go:74] duration metric: took 4.050204ms to wait for pod list to return data ...
	I1208 23:57:12.924799  774530 default_sa.go:34] waiting for default service account to be created ...
	I1208 23:57:12.927550  774530 default_sa.go:45] found service account: "default"
	I1208 23:57:12.927566  774530 default_sa.go:55] duration metric: took 2.762989ms for default service account to be created ...
	I1208 23:57:12.927574  774530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 23:57:12.930285  774530 system_pods.go:86] 7 kube-system pods found
	I1208 23:57:12.930307  774530 system_pods.go:89] "coredns-66bc5c9577-8ccz5" [2ec1da29-2313-4d2d-ba93-3a9b04a9edf8] Running
	I1208 23:57:12.930315  774530 system_pods.go:89] "etcd-test-preload-687309" [9f0fccf7-cbaa-40ca-91f2-dfc81c5c4015] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 23:57:12.930320  774530 system_pods.go:89] "kube-apiserver-test-preload-687309" [8123fd82-352d-4b45-aa4e-5cf28d31d16a] Running
	I1208 23:57:12.930326  774530 system_pods.go:89] "kube-controller-manager-test-preload-687309" [5ef88f13-095f-4995-804a-48edf6b5f698] Running
	I1208 23:57:12.930332  774530 system_pods.go:89] "kube-proxy-cdgq4" [432af2a8-2f4e-4cf2-ba0c-2c728c036c9b] Running
	I1208 23:57:12.930335  774530 system_pods.go:89] "kube-scheduler-test-preload-687309" [c23941e3-0fd0-4928-81ac-83c039e944f1] Running
	I1208 23:57:12.930340  774530 system_pods.go:89] "storage-provisioner" [f2ce4fed-cb49-42c2-b38a-bee7629bb151] Running
	I1208 23:57:12.930345  774530 system_pods.go:126] duration metric: took 2.766116ms to wait for k8s-apps to be running ...
	I1208 23:57:12.930351  774530 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 23:57:12.930404  774530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:57:12.946049  774530 system_svc.go:56] duration metric: took 15.690704ms WaitForService to wait for kubelet
	I1208 23:57:12.946078  774530 kubeadm.go:587] duration metric: took 8.799955322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 23:57:12.946096  774530 node_conditions.go:102] verifying NodePressure condition ...
	I1208 23:57:12.948320  774530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 23:57:12.948337  774530 node_conditions.go:123] node cpu capacity is 2
	I1208 23:57:12.948349  774530 node_conditions.go:105] duration metric: took 2.249803ms to run NodePressure ...
	I1208 23:57:12.948374  774530 start.go:242] waiting for startup goroutines ...
	I1208 23:57:12.948384  774530 start.go:247] waiting for cluster config update ...
	I1208 23:57:12.948404  774530 start.go:256] writing updated cluster config ...
	I1208 23:57:12.948664  774530 ssh_runner.go:195] Run: rm -f paused
	I1208 23:57:12.953615  774530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 23:57:12.954124  774530 kapi.go:59] client config for test-preload-687309: &rest.Config{Host:"https://192.168.39.114:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.crt", KeyFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/profiles/test-preload-687309/client.key", CAFile:"/home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 23:57:12.956941  774530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8ccz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:12.961910  774530 pod_ready.go:94] pod "coredns-66bc5c9577-8ccz5" is "Ready"
	I1208 23:57:12.961934  774530 pod_ready.go:86] duration metric: took 4.970986ms for pod "coredns-66bc5c9577-8ccz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:12.964180  774530 pod_ready.go:83] waiting for pod "etcd-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 23:57:14.970009  774530 pod_ready.go:104] pod "etcd-test-preload-687309" is not "Ready", error: <nil>
	W1208 23:57:16.970696  774530 pod_ready.go:104] pod "etcd-test-preload-687309" is not "Ready", error: <nil>
	I1208 23:57:17.970990  774530 pod_ready.go:94] pod "etcd-test-preload-687309" is "Ready"
	I1208 23:57:17.971027  774530 pod_ready.go:86] duration metric: took 5.006829013s for pod "etcd-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:17.973729  774530 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:17.978265  774530 pod_ready.go:94] pod "kube-apiserver-test-preload-687309" is "Ready"
	I1208 23:57:17.978286  774530 pod_ready.go:86] duration metric: took 4.535494ms for pod "kube-apiserver-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:17.980659  774530 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:17.984687  774530 pod_ready.go:94] pod "kube-controller-manager-test-preload-687309" is "Ready"
	I1208 23:57:17.984708  774530 pod_ready.go:86] duration metric: took 4.026171ms for pod "kube-controller-manager-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:17.986689  774530 pod_ready.go:83] waiting for pod "kube-proxy-cdgq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:18.167763  774530 pod_ready.go:94] pod "kube-proxy-cdgq4" is "Ready"
	I1208 23:57:18.167790  774530 pod_ready.go:86] duration metric: took 181.077332ms for pod "kube-proxy-cdgq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:18.368251  774530 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:18.767849  774530 pod_ready.go:94] pod "kube-scheduler-test-preload-687309" is "Ready"
	I1208 23:57:18.767878  774530 pod_ready.go:86] duration metric: took 399.5981ms for pod "kube-scheduler-test-preload-687309" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 23:57:18.767891  774530 pod_ready.go:40] duration metric: took 5.814246882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
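
The extra wait above iterates one label selector per control-plane component and blocks until a matching pod reports the Ready condition. The real loop also accepts pod deletion ("or be gone") and caps the whole wait at 4m0s; the client-go sketch below simplifies both, under the same kubeconfig assumption as the node sketch:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/22075-744871/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		selectors := []string{"k8s-app=kube-dns", "component=etcd",
			"component=kube-apiserver", "component=kube-controller-manager",
			"k8s-app=kube-proxy", "component=kube-scheduler"}
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
					fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
					break
				}
				time.Sleep(500 * time.Millisecond) // pod not Ready (will retry)
			}
		}
	}
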
	I1208 23:57:18.811987  774530 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1208 23:57:18.813933  774530 out.go:179] * Done! kubectl is now configured to use "test-preload-687309" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.620220083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f09618d2-bcc7-4409-b869-5c7ed2e8b022 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.621368019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e88279f0-3e1c-4a50-a3ac-a64ed64e5ba7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.622287783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765238239622264722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e88279f0-3e1c-4a50-a3ac-a64ed64e5ba7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.623279895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de7471a4-9428-4359-9570-3065037e1059 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.623327176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de7471a4-9428-4359-9570-3065037e1059 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.623519754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8de1d308bca6e87a9e3eb281e2deef54e0ba0c23c394d9f030eb08e370870e0,PodSandboxId:6db22604cc3d0e1903082f54458fe36364d641b45fad2cd274b1f2dbcb194ef6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765238230716669993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8ccz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9c85cc31adafd72a989188ca489404b5520f3188b8f5681db2cd7a23cf1bec,PodSandboxId:660428a214a8c826979ead1e0bdd921c9dca181ee941119c8258610f4a26fc71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765238223039614193,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdgq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445eea15e6c59f8e6ae7a676dbae71d518e5bc81d9cac490273121a78053792b,PodSandboxId:11e55f512adac066e0819fe4ac17b50ce7992e93a8cf8a11bbfbeda949092db4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765238223042013844,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2ce4fed-cb49-42c2-b38a-bee7629bb151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c22ff5940f1949c825c04ef1f7ed3f4eb97b82bc56ba5502fe32bbfa04eceef,PodSandboxId:5a8e3fdb685ddc64ac308de1fdb9a266e95c792170b14232a014467048c6b01c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765238219503974448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0445fa1fa5bf0818d68e9d6cedaeec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0a6a2a5c272abea3ed00c88d78309fceb4262cae9cbe23d9235c11ed1b0141,PodSandboxId:b23bc9bb555712caaa5127c826ada81004d60a3a5c59b45d7324d1f803cade07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765238219472660216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4cb138163f227c5561a44f5df4f37c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d43cee3b7286b9b69ae9308c3189b402eee6908dfa733861f09a7f2ea806eb44,PodSandboxId:105e3027cd14ece16583ea148f97ea5a2c36e4c09e50dc88046fc1c008b8f1c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765238219465902567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efb5bb866e087334abab732689f2da0,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ad694ccb2232dba4c2037616572618cdfe6887c3baaaf9fe1f736776f7f2cb,PodSandboxId:cbcd6ef81b30c7490646279188f5a7cd8f50218212688d512526b20ce47aa16f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765238219415913724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afed3ea57b9bd84657559630f9b1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de7471a4-9428-4359-9570-3065037e1059 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.656224936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53d1245d-a762-4863-8ea6-646d40337642 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.656295496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53d1245d-a762-4863-8ea6-646d40337642 name=/runtime.v1.RuntimeService/Version
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.657839550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8eb1cd2-41c2-430f-a0f9-fd454d295c6e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.658694167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765238239658635414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8eb1cd2-41c2-430f-a0f9-fd454d295c6e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.659772850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=350c76b7-f03d-458d-bf68-e54ac4fb6122 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.659823118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=350c76b7-f03d-458d-bf68-e54ac4fb6122 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.660045038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8de1d308bca6e87a9e3eb281e2deef54e0ba0c23c394d9f030eb08e370870e0,PodSandboxId:6db22604cc3d0e1903082f54458fe36364d641b45fad2cd274b1f2dbcb194ef6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765238230716669993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8ccz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9c85cc31adafd72a989188ca489404b5520f3188b8f5681db2cd7a23cf1bec,PodSandboxId:660428a214a8c826979ead1e0bdd921c9dca181ee941119c8258610f4a26fc71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765238223039614193,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdgq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445eea15e6c59f8e6ae7a676dbae71d518e5bc81d9cac490273121a78053792b,PodSandboxId:11e55f512adac066e0819fe4ac17b50ce7992e93a8cf8a11bbfbeda949092db4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765238223042013844,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2ce4fed-cb49-42c2-b38a-bee7629bb151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c22ff5940f1949c825c04ef1f7ed3f4eb97b82bc56ba5502fe32bbfa04eceef,PodSandboxId:5a8e3fdb685ddc64ac308de1fdb9a266e95c792170b14232a014467048c6b01c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765238219503974448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0445fa1fa5bf0818d68e9d6cedaeec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0a6a2a5c272abea3ed00c88d78309fceb4262cae9cbe23d9235c11ed1b0141,PodSandboxId:b23bc9bb555712caaa5127c826ada81004d60a3a5c59b45d7324d1f803cade07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765238219472660216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4cb138163f227c5561a44f5df4f37c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d43cee3b7286b9b69ae9308c3189b402eee6908dfa733861f09a7f2ea806eb44,PodSandboxId:105e3027cd14ece16583ea148f97ea5a2c36e4c09e50dc88046fc1c008b8f1c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765238219465902567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efb5bb866e087334abab732689f2da0,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ad694ccb2232dba4c2037616572618cdfe6887c3baaaf9fe1f736776f7f2cb,PodSandboxId:cbcd6ef81b30c7490646279188f5a7cd8f50218212688d512526b20ce47aa16f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765238219415913724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afed3ea57b9bd84657559630f9b1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=350c76b7-f03d-458d-bf68-e54ac4fb6122 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.688839947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f25a5189-96f6-43b4-b814-e8998ffd856f name=/runtime.v1.RuntimeService/Version
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.688969315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f25a5189-96f6-43b4-b814-e8998ffd856f name=/runtime.v1.RuntimeService/Version
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.689242948Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22e88c0e-f3d7-47e1-bee0-d36a0b08112e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.689417438Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6db22604cc3d0e1903082f54458fe36364d641b45fad2cd274b1f2dbcb194ef6,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-8ccz5,Uid:2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238230496530307,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-8ccz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-08T23:57:02.611764405Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:660428a214a8c826979ead1e0bdd921c9dca181ee941119c8258610f4a26fc71,Metadata:&PodSandboxMetadata{Name:kube-proxy-cdgq4,Uid:432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1765238222928599992,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cdgq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-08T23:57:02.611760890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11e55f512adac066e0819fe4ac17b50ce7992e93a8cf8a11bbfbeda949092db4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f2ce4fed-cb49-42c2-b38a-bee7629bb151,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238222925777763,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2ce4fed-cb49-42c2-b38a-bee7
629bb151,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-08T23:57:02.611763198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a8e3fdb685ddc64ac308de1fdb9a266e95c792170b14232a014467048c6b01c,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-687309,Uid:bb0445fa1fa5bf081
8d68e9d6cedaeec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238219285751919,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0445fa1fa5bf0818d68e9d6cedaeec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.114:2379,kubernetes.io/config.hash: bb0445fa1fa5bf0818d68e9d6cedaeec,kubernetes.io/config.seen: 2025-12-08T23:56:57.683075092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cbcd6ef81b30c7490646279188f5a7cd8f50218212688d512526b20ce47aa16f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-687309,Uid:62afed3ea57b9bd84657559630f9b1eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238219256519465,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pr
eload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afed3ea57b9bd84657559630f9b1eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62afed3ea57b9bd84657559630f9b1eb,kubernetes.io/config.seen: 2025-12-08T23:56:57.610668440Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b23bc9bb555712caaa5127c826ada81004d60a3a5c59b45d7324d1f803cade07,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-687309,Uid:7f4cb138163f227c5561a44f5df4f37c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238219252895803,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4cb138163f227c5561a44f5df4f37c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.114:8443,kubernetes.io/config.hash: 7f4cb138163f227
c5561a44f5df4f37c,kubernetes.io/config.seen: 2025-12-08T23:56:57.610661559Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:105e3027cd14ece16583ea148f97ea5a2c36e4c09e50dc88046fc1c008b8f1c1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-687309,Uid:9efb5bb866e087334abab732689f2da0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765238219252550005,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efb5bb866e087334abab732689f2da0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9efb5bb866e087334abab732689f2da0,kubernetes.io/config.seen: 2025-12-08T23:56:57.610667086Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=22e88c0e-f3d7-47e1-bee0-d36a0b08112e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.690281937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d24097c5-7686-4ea7-a160-22577e8fa8b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.690373333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d24097c5-7686-4ea7-a160-22577e8fa8b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.690551059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8de1d308bca6e87a9e3eb281e2deef54e0ba0c23c394d9f030eb08e370870e0,PodSandboxId:6db22604cc3d0e1903082f54458fe36364d641b45fad2cd274b1f2dbcb194ef6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765238230716669993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8ccz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9c85cc31adafd72a989188ca489404b5520f3188b8f5681db2cd7a23cf1bec,PodSandboxId:660428a214a8c826979ead1e0bdd921c9dca181ee941119c8258610f4a26fc71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765238223039614193,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdgq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445eea15e6c59f8e6ae7a676dbae71d518e5bc81d9cac490273121a78053792b,PodSandboxId:11e55f512adac066e0819fe4ac17b50ce7992e93a8cf8a11bbfbeda949092db4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765238223042013844,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2ce4fed-cb49-42c2-b38a-bee7629bb151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c22ff5940f1949c825c04ef1f7ed3f4eb97b82bc56ba5502fe32bbfa04eceef,PodSandboxId:5a8e3fdb685ddc64ac308de1fdb9a266e95c792170b14232a014467048c6b01c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765238219503974448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0445fa1fa5bf0818d68e9d6cedaeec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0a6a2a5c272abea3ed00c88d78309fceb4262cae9cbe23d9235c11ed1b0141,PodSandboxId:b23bc9bb555712caaa5127c826ada81004d60a3a5c59b45d7324d1f803cade07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765238219472660216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4cb138163f227c5561a44f5df4f37c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d43cee3b7286b9b69ae9308c3189b402eee6908dfa733861f09a7f2ea806eb44,PodSandboxId:105e3027cd14ece16583ea148f97ea5a2c36e4c09e50dc88046fc1c008b8f1c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765238219465902567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efb5bb866e087334abab732689f2da0,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ad694ccb2232dba4c2037616572618cdfe6887c3baaaf9fe1f736776f7f2cb,PodSandboxId:cbcd6ef81b30c7490646279188f5a7cd8f50218212688d512526b20ce47aa16f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765238219415913724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afed3ea57b9bd84657559630f9b1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d24097c5-7686-4ea7-a160-22577e8fa8b9 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.691676743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9da1cc44-21b7-4907-b72b-9d292006ae52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.692067237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765238239692050209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9da1cc44-21b7-4907-b72b-9d292006ae52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.693075057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bac7652f-8173-4e27-bb84-c80b07c6777b name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.693247222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bac7652f-8173-4e27-bb84-c80b07c6777b name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 23:57:19 test-preload-687309 crio[840]: time="2025-12-08 23:57:19.693423302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8de1d308bca6e87a9e3eb281e2deef54e0ba0c23c394d9f030eb08e370870e0,PodSandboxId:6db22604cc3d0e1903082f54458fe36364d641b45fad2cd274b1f2dbcb194ef6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765238230716669993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8ccz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec1da29-2313-4d2d-ba93-3a9b04a9edf8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e9c85cc31adafd72a989188ca489404b5520f3188b8f5681db2cd7a23cf1bec,PodSandboxId:660428a214a8c826979ead1e0bdd921c9dca181ee941119c8258610f4a26fc71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765238223039614193,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdgq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 432af2a8-2f4e-4cf2-ba0c-2c728c036c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445eea15e6c59f8e6ae7a676dbae71d518e5bc81d9cac490273121a78053792b,PodSandboxId:11e55f512adac066e0819fe4ac17b50ce7992e93a8cf8a11bbfbeda949092db4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765238223042013844,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2ce4fed-cb49-42c2-b38a-bee7629bb151,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c22ff5940f1949c825c04ef1f7ed3f4eb97b82bc56ba5502fe32bbfa04eceef,PodSandboxId:5a8e3fdb685ddc64ac308de1fdb9a266e95c792170b14232a014467048c6b01c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765238219503974448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0445fa1fa5bf0818d68e9d6cedaeec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0a6a2a5c272abea3ed00c88d78309fceb4262cae9cbe23d9235c11ed1b0141,PodSandboxId:b23bc9bb555712caaa5127c826ada81004d60a3a5c59b45d7324d1f803cade07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765238219472660216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4cb138163f227c5561a44f5df4f37c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d43cee3b7286b9b69ae9308c3189b402eee6908dfa733861f09a7f2ea806eb44,PodSandboxId:105e3027cd14ece16583ea148f97ea5a2c36e4c09e50dc88046fc1c008b8f1c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765238219465902567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9efb5bb866e087334abab732689f2da0,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ad694ccb2232dba4c2037616572618cdfe6887c3baaaf9fe1f736776f7f2cb,PodSandboxId:cbcd6ef81b30c7490646279188f5a7cd8f50218212688d512526b20ce47aa16f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Imag
eSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765238219415913724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-687309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afed3ea57b9bd84657559630f9b1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bac7652f-8173-4e27-bb84-c80b07c6777b name=/runtime.v1.RuntimeServic
e/ListContainers
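
The repeated Version / ImageFsInfo / ListContainers request-response pairs above are CRI clients, chiefly the kubelet, polling cri-o on its normal status cycle, so identical container lists recurring within the same second are expected rather than a fault. A minimal sketch for issuing the same CRI calls by hand (assuming the profile name test-preload-687309 and that crictl ships in the guest image):

    # same RuntimeService/Version call seen in the debug log
    minikube -p test-preload-687309 ssh -- sudo crictl version
    # same ImageService/ImageFsInfo call
    minikube -p test-preload-687309 ssh -- sudo crictl imagefsinfo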
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	d8de1d308bca6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   1                   6db22604cc3d0       coredns-66bc5c9577-8ccz5                      kube-system
	445eea15e6c59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   11e55f512adac       storage-provisioner                           kube-system
	3e9c85cc31ada       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   660428a214a8c       kube-proxy-cdgq4                              kube-system
	3c22ff5940f19       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   5a8e3fdb685dd       etcd-test-preload-687309                      kube-system
	4b0a6a2a5c272       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   b23bc9bb55571       kube-apiserver-test-preload-687309            kube-system
	d43cee3b7286b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   105e3027cd14e       kube-controller-manager-test-preload-687309   kube-system
	68ad694ccb223       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   cbcd6ef81b30c       kube-scheduler-test-preload-687309            kube-system
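
The table above is the node's CRI view of its containers; a sketch for regenerating it inside the guest (assuming the same profile name):

    # list all CRI containers, including exited ones
    minikube -p test-preload-687309 ssh -- sudo crictl ps -a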
	
	
	==> coredns [d8de1d308bca6e87a9e3eb281e2deef54e0ba0c23c394d9f030eb08e370870e0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46193 - 25344 "HINFO IN 7831937134548952388.2711182261696493070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031112838s
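
The single HINFO query for a random name is CoreDNS's startup self-check against its upstream resolver, and the NXDOMAIN answer is the normal result. A sketch for pulling the same log without a full dump (assuming the k8s-app=kube-dns label shown in the sandbox metadata above):

    kubectl --context test-preload-687309 -n kube-system logs -l k8s-app=kube-dns --tail=20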
	
	
	==> describe nodes <==
	Name:               test-preload-687309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-687309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2846307350d09469fc6b6b47dd0c4837fa740d9c
	                    minikube.k8s.io/name=test-preload-687309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T23_55_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 23:55:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-687309
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 23:57:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 23:57:12 +0000   Mon, 08 Dec 2025 23:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 23:57:12 +0000   Mon, 08 Dec 2025 23:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 23:57:12 +0000   Mon, 08 Dec 2025 23:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 23:57:12 +0000   Mon, 08 Dec 2025 23:57:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    test-preload-687309
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 900d84ebe0574d7d9312719289cd12a5
	  System UUID:                900d84eb-e057-4d7d-9312-719289cd12a5
	  Boot ID:                    d5dd0eeb-8611-4b47-8f66-2d1192142488
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8ccz5                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     90s
	  kube-system                 etcd-test-preload-687309                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-687309             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-687309    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-cdgq4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-687309             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 89s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node test-preload-687309 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node test-preload-687309 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node test-preload-687309 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     96s                  kubelet          Node test-preload-687309 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s                  kubelet          Node test-preload-687309 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-687309 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 96s                  kubelet          Starting kubelet.
	  Normal   NodeReady                95s                  kubelet          Node test-preload-687309 status is now: NodeReady
	  Normal   RegisteredNode           91s                  node-controller  Node test-preload-687309 event: Registered Node test-preload-687309 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-687309 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-687309 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-687309 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-687309 has been rebooted, boot id: d5dd0eeb-8611-4b47-8f66-2d1192142488
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-687309 event: Registered Node test-preload-687309 in Controller
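
The node description above can be regenerated directly against the cluster; a sketch, assuming minikube's default behavior of naming the kubeconfig context after the profile:

    kubectl --context test-preload-687309 describe node test-preload-687309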
	
	
	==> dmesg <==
	[Dec 8 23:56] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003077] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.941086] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115915] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.096179] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 8 23:57] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000018] kauditd_printk_skb: 128 callbacks suppressed
	[  +7.603381] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [3c22ff5940f1949c825c04ef1f7ed3f4eb97b82bc56ba5502fe32bbfa04eceef] <==
	{"level":"warn","ts":"2025-12-08T23:57:01.112152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.123446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.130400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.143723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.156453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.166500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.178124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.192571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.204627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.212533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.219916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.234082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.243909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.257030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.263101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.272482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.279035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.292730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.301578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.320676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.324564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.342442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.351276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.371539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T23:57:01.467353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38912","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:57:19 up 0 min,  0 users,  load average: 1.01, 0.26, 0.09
	Linux test-preload-687309 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4b0a6a2a5c272abea3ed00c88d78309fceb4262cae9cbe23d9235c11ed1b0141] <==
	I1208 23:57:02.303156       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 23:57:02.303168       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 23:57:02.303256       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1208 23:57:02.303339       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 23:57:02.303537       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 23:57:02.307690       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 23:57:02.308612       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 23:57:02.308642       1 aggregator.go:171] initial CRD sync complete...
	I1208 23:57:02.308648       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 23:57:02.308652       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 23:57:02.308656       1 cache.go:39] Caches are synced for autoregister controller
	I1208 23:57:02.308658       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 23:57:02.309867       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 23:57:02.309915       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 23:57:02.311131       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1208 23:57:02.315874       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 23:57:02.720202       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 23:57:03.110126       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 23:57:03.924081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 23:57:03.962500       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1208 23:57:03.992038       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 23:57:03.998512       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 23:57:05.930015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1208 23:57:05.979606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 23:57:06.030186       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d43cee3b7286b9b69ae9308c3189b402eee6908dfa733861f09a7f2ea806eb44] <==
	I1208 23:57:05.576815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1208 23:57:05.577001       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 23:57:05.577048       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 23:57:05.577493       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1208 23:57:05.578078       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 23:57:05.578592       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1208 23:57:05.578670       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 23:57:05.578973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 23:57:05.580416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 23:57:05.580475       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 23:57:05.580513       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1208 23:57:05.580521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 23:57:05.585742       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 23:57:05.589076       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 23:57:05.592335       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 23:57:05.595565       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 23:57:05.605899       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 23:57:05.607142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 23:57:05.608238       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 23:57:05.614509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1208 23:57:05.632514       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 23:57:05.647906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 23:57:05.647997       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 23:57:05.648008       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 23:57:15.531242       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3e9c85cc31adafd72a989188ca489404b5520f3188b8f5681db2cd7a23cf1bec] <==
	I1208 23:57:03.429152       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 23:57:03.529899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 23:57:03.529966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.114"]
	E1208 23:57:03.530055       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 23:57:03.563635       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 23:57:03.563695       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 23:57:03.563718       1 server_linux.go:132] "Using iptables Proxier"
	I1208 23:57:03.571867       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 23:57:03.572143       1 server.go:527] "Version info" version="v1.34.2"
	I1208 23:57:03.572173       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 23:57:03.576513       1 config.go:200] "Starting service config controller"
	I1208 23:57:03.576539       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 23:57:03.576584       1 config.go:309] "Starting node config controller"
	I1208 23:57:03.576604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 23:57:03.576609       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 23:57:03.577041       1 config.go:106] "Starting endpoint slice config controller"
	I1208 23:57:03.577080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 23:57:03.577096       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 23:57:03.577108       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 23:57:03.677012       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 23:57:03.677222       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 23:57:03.677236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [68ad694ccb2232dba4c2037616572618cdfe6887c3baaaf9fe1f736776f7f2cb] <==
	I1208 23:57:01.254637       1 serving.go:386] Generated self-signed cert in-memory
	W1208 23:57:02.122784       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 23:57:02.122824       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 23:57:02.122833       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 23:57:02.122839       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 23:57:02.226887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 23:57:02.226906       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 23:57:02.233441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 23:57:02.233482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 23:57:02.233886       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 23:57:02.234189       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 23:57:02.333680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.467568    1188 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-687309"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.477948    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-687309\" already exists" pod="kube-system/etcd-test-preload-687309"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.609118    1188 apiserver.go:52] "Watching apiserver"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.612605    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8ccz5" podUID="2ec1da29-2313-4d2d-ba93-3a9b04a9edf8"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.641092    1188 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.684991    1188 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.715016    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f2ce4fed-cb49-42c2-b38a-bee7629bb151-tmp\") pod \"storage-provisioner\" (UID: \"f2ce4fed-cb49-42c2-b38a-bee7629bb151\") " pod="kube-system/storage-provisioner"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.715093    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/432af2a8-2f4e-4cf2-ba0c-2c728c036c9b-lib-modules\") pod \"kube-proxy-cdgq4\" (UID: \"432af2a8-2f4e-4cf2-ba0c-2c728c036c9b\") " pod="kube-system/kube-proxy-cdgq4"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.715146    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/432af2a8-2f4e-4cf2-ba0c-2c728c036c9b-xtables-lock\") pod \"kube-proxy-cdgq4\" (UID: \"432af2a8-2f4e-4cf2-ba0c-2c728c036c9b\") " pod="kube-system/kube-proxy-cdgq4"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.716474    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.716584    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume podName:2ec1da29-2313-4d2d-ba93-3a9b04a9edf8 nodeName:}" failed. No retries permitted until 2025-12-08 23:57:03.216566822 +0000 UTC m=+5.696678298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume") pod "coredns-66bc5c9577-8ccz5" (UID: "2ec1da29-2313-4d2d-ba93-3a9b04a9edf8") : object "kube-system"/"coredns" not registered
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: I1208 23:57:02.754573    1188 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-687309"
	Dec 08 23:57:02 test-preload-687309 kubelet[1188]: E1208 23:57:02.762877    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-687309\" already exists" pod="kube-system/etcd-test-preload-687309"
	Dec 08 23:57:03 test-preload-687309 kubelet[1188]: E1208 23:57:03.218868    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 23:57:03 test-preload-687309 kubelet[1188]: E1208 23:57:03.219546    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume podName:2ec1da29-2313-4d2d-ba93-3a9b04a9edf8 nodeName:}" failed. No retries permitted until 2025-12-08 23:57:04.219518561 +0000 UTC m=+6.699630035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume") pod "coredns-66bc5c9577-8ccz5" (UID: "2ec1da29-2313-4d2d-ba93-3a9b04a9edf8") : object "kube-system"/"coredns" not registered
	Dec 08 23:57:04 test-preload-687309 kubelet[1188]: E1208 23:57:04.225456    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 23:57:04 test-preload-687309 kubelet[1188]: E1208 23:57:04.225843    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume podName:2ec1da29-2313-4d2d-ba93-3a9b04a9edf8 nodeName:}" failed. No retries permitted until 2025-12-08 23:57:06.225823949 +0000 UTC m=+8.705935428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume") pod "coredns-66bc5c9577-8ccz5" (UID: "2ec1da29-2313-4d2d-ba93-3a9b04a9edf8") : object "kube-system"/"coredns" not registered
	Dec 08 23:57:04 test-preload-687309 kubelet[1188]: E1208 23:57:04.686095    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8ccz5" podUID="2ec1da29-2313-4d2d-ba93-3a9b04a9edf8"
	Dec 08 23:57:06 test-preload-687309 kubelet[1188]: E1208 23:57:06.239103    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 23:57:06 test-preload-687309 kubelet[1188]: E1208 23:57:06.239203    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume podName:2ec1da29-2313-4d2d-ba93-3a9b04a9edf8 nodeName:}" failed. No retries permitted until 2025-12-08 23:57:10.239189923 +0000 UTC m=+12.719301396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2ec1da29-2313-4d2d-ba93-3a9b04a9edf8-config-volume") pod "coredns-66bc5c9577-8ccz5" (UID: "2ec1da29-2313-4d2d-ba93-3a9b04a9edf8") : object "kube-system"/"coredns" not registered
	Dec 08 23:57:06 test-preload-687309 kubelet[1188]: E1208 23:57:06.686841    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8ccz5" podUID="2ec1da29-2313-4d2d-ba93-3a9b04a9edf8"
	Dec 08 23:57:07 test-preload-687309 kubelet[1188]: E1208 23:57:07.687700    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765238227686577666 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 23:57:07 test-preload-687309 kubelet[1188]: E1208 23:57:07.687720    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765238227686577666 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 23:57:17 test-preload-687309 kubelet[1188]: E1208 23:57:17.688972    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765238237688701046 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 23:57:17 test-preload-687309 kubelet[1188]: E1208 23:57:17.688991    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765238237688701046 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [445eea15e6c59f8e6ae7a676dbae71d518e5bc81d9cac490273121a78053792b] <==
	I1208 23:57:03.241874       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-687309 -n test-preload-687309
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-687309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-687309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-687309
--- FAIL: TestPreload (142.64s)
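The TestPreload kubelet log above ends with repeated eviction-manager failures ("missing image stats" while reading the CRI image-filesystem response), alongside coredns configmap mount retries while the CNI was still coming up. A minimal triage sketch, assuming the profile is reproduced live (the cleanup step above deletes "test-preload-687309") and that crictl ships in the guest image; the commands mirror the ssh invocation style used elsewhere in this report:

	# Show the image-filesystem stats the kubelet eviction manager is trying to parse
	out/minikube-linux-amd64 -p test-preload-687309 ssh "sudo crictl imagefsinfo"
	# Dump the CRI-O runtime status for comparison
	out/minikube-linux-amd64 -p test-preload-687309 ssh "sudo crictl info"
	# Capture the full log bundle for offline analysis
	out/minikube-linux-amd64 -p test-preload-687309 logs --file=preload-logs.txt

Both crictl subcommands and the minikube "logs --file" flag are standard; only the output file name is illustrative.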

TestPause/serial/SecondStartNoReconfiguration (62.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-165880 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-165880 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.048419455s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-165880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-165880" primary control-plane node in "pause-165880" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-165880" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1209 00:04:55.497792  782653 out.go:360] Setting OutFile to fd 1 ...
	I1209 00:04:55.497905  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.497917  782653 out.go:374] Setting ErrFile to fd 2...
	I1209 00:04:55.497923  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.498170  782653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1209 00:04:55.498699  782653 out.go:368] Setting JSON to false
	I1209 00:04:55.499711  782653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10035,"bootTime":1765228660,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 00:04:55.499775  782653 start.go:143] virtualization: kvm guest
	I1209 00:04:55.501320  782653 out.go:179] * [pause-165880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 00:04:55.502424  782653 out.go:179]   - MINIKUBE_LOCATION=22075
	I1209 00:04:55.502444  782653 notify.go:221] Checking for updates...
	I1209 00:04:55.504965  782653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 00:04:55.505986  782653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:04:55.506972  782653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:55.508046  782653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 00:04:55.509063  782653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 00:04:55.510716  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:04:55.511456  782653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 00:04:55.547113  782653 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 00:04:55.548094  782653 start.go:309] selected driver: kvm2
	I1209 00:04:55.548112  782653 start.go:927] validating driver "kvm2" against &{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.548282  782653 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 00:04:55.549299  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:04:55.549406  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:04:55.549484  782653 start.go:353] cluster config:
	{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.549643  782653 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 00:04:55.551246  782653 out.go:179] * Starting "pause-165880" primary control-plane node in "pause-165880" cluster
	I1209 00:04:55.552247  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:04:55.552306  782653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 00:04:55.552318  782653 cache.go:65] Caching tarball of preloaded images
	I1209 00:04:55.552424  782653 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 00:04:55.552435  782653 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 00:04:55.552555  782653 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/config.json ...
	I1209 00:04:55.552768  782653 start.go:360] acquireMachinesLock for pause-165880: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 00:05:19.665568  782653 start.go:364] duration metric: took 24.112756677s to acquireMachinesLock for "pause-165880"
	I1209 00:05:19.665613  782653 start.go:96] Skipping create...Using existing machine configuration
	I1209 00:05:19.665627  782653 fix.go:54] fixHost starting: 
	I1209 00:05:19.668343  782653 fix.go:112] recreateIfNeeded on pause-165880: state=Running err=<nil>
	W1209 00:05:19.668411  782653 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 00:05:19.670388  782653 out.go:252] * Updating the running kvm2 "pause-165880" VM ...
	I1209 00:05:19.670427  782653 machine.go:94] provisionDockerMachine start ...
	I1209 00:05:19.674341  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.674886  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.674928  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.675273  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.675624  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.675652  782653 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:05:19.791731  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.791773  782653 buildroot.go:166] provisioning hostname "pause-165880"
	I1209 00:05:19.795624  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796205  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.796234  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796514  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.796747  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.796759  782653 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-165880 && echo "pause-165880" | sudo tee /etc/hostname
	I1209 00:05:19.936746  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.940045  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940462  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.940493  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940654  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.940846  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.940860  782653 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-165880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-165880/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-165880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:05:20.060582  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:05:20.060614  782653 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:05:20.060650  782653 buildroot.go:174] setting up certificates
	I1209 00:05:20.060664  782653 provision.go:84] configureAuth start
	I1209 00:05:20.065295  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.066045  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.066090  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069288  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069780  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.069809  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.070050  782653 provision.go:143] copyHostCerts
	I1209 00:05:20.070117  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:05:20.070131  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:05:20.070204  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:05:20.070358  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:05:20.070393  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:05:20.070432  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:05:20.070548  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:05:20.070561  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:05:20.070599  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:05:20.070687  782653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.pause-165880 san=[127.0.0.1 192.168.83.217 localhost minikube pause-165880]
	I1209 00:05:20.171275  782653 provision.go:177] copyRemoteCerts
	I1209 00:05:20.171338  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:05:20.174350  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.174927  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.174953  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.175169  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:20.271573  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:05:20.314206  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1209 00:05:20.346866  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 00:05:20.384460  782653 provision.go:87] duration metric: took 323.774611ms to configureAuth
	I1209 00:05:20.384496  782653 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:05:20.384810  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:20.387997  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388483  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.388520  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388698  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:20.388903  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:20.388917  782653 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:05:26.003810  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:05:26.003841  782653 machine.go:97] duration metric: took 6.33340561s to provisionDockerMachine
	I1209 00:05:26.003854  782653 start.go:293] postStartSetup for "pause-165880" (driver="kvm2")
	I1209 00:05:26.003864  782653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:05:26.003941  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:05:26.007221  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007720  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.007781  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007981  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.100638  782653 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:05:26.105932  782653 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:05:26.105968  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:05:26.106049  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:05:26.106130  782653 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:05:26.106227  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:05:26.123738  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:26.167380  782653 start.go:296] duration metric: took 163.489508ms for postStartSetup
	I1209 00:05:26.167445  782653 fix.go:56] duration metric: took 6.501816173s for fixHost
	I1209 00:05:26.171923  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172486  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.172518  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172775  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:26.173094  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:26.173118  782653 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:05:26.293758  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238726.290651991
	
	I1209 00:05:26.293787  782653 fix.go:216] guest clock: 1765238726.290651991
	I1209 00:05:26.293797  782653 fix.go:229] Guest: 2025-12-09 00:05:26.290651991 +0000 UTC Remote: 2025-12-09 00:05:26.167452687 +0000 UTC m=+30.731624268 (delta=123.199304ms)
	I1209 00:05:26.293823  782653 fix.go:200] guest clock delta is within tolerance: 123.199304ms
	I1209 00:05:26.293829  782653 start.go:83] releasing machines lock for "pause-165880", held for 6.628237017s
	I1209 00:05:26.297200  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.297750  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.297786  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.298435  782653 ssh_runner.go:195] Run: cat /version.json
	I1209 00:05:26.298534  782653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:05:26.302194  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302574  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302770  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.302815  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302991  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.303012  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.303153  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.303414  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.388338  782653 ssh_runner.go:195] Run: systemctl --version
	I1209 00:05:26.411503  782653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:05:26.564483  782653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:05:26.577338  782653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:05:26.577435  782653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:05:26.589629  782653 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 00:05:26.589669  782653 start.go:496] detecting cgroup driver to use...
	I1209 00:05:26.589771  782653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:05:26.614167  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:05:26.634398  782653 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:05:26.634551  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:05:26.655828  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:05:26.677740  782653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:05:26.879759  782653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:05:27.075050  782653 docker.go:234] disabling docker service ...
	I1209 00:05:27.075148  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:05:27.108544  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:05:27.128174  782653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:05:27.333496  782653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:05:27.527709  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:05:27.547600  782653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:05:27.573078  782653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:05:27.573176  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.591439  782653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:05:27.591536  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.610214  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.624565  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.637537  782653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:05:27.652581  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.667490  782653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.683625  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.699870  782653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:05:27.713298  782653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 00:05:27.726610  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:27.922280  782653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:28.154862  782653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:28.154956  782653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:28.160673  782653 start.go:564] Will wait 60s for crictl version
	I1209 00:05:28.160757  782653 ssh_runner.go:195] Run: which crictl
	I1209 00:05:28.165831  782653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:28.203701  782653 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:28.203843  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.238662  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.282458  782653 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 00:05:28.287928  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288417  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:28.288452  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288697  782653 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:28.295003  782653 kubeadm.go:884] updating cluster {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:28.295164  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:28.295231  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.342780  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.342817  782653 crio.go:433] Images already preloaded, skipping extraction
	I1209 00:05:28.342903  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.378433  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.378469  782653 cache_images.go:86] Images are preloaded, skipping loading
	I1209 00:05:28.378482  782653 kubeadm.go:935] updating node { 192.168.83.217 8443 v1.34.2 crio true true} ...
	I1209 00:05:28.378663  782653 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-165880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 00:05:28.378778  782653 ssh_runner.go:195] Run: crio config
	I1209 00:05:28.437108  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:05:28.437142  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:05:28.437168  782653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 00:05:28.437201  782653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-165880 NodeName:pause-165880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 00:05:28.437474  782653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-165880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 00:05:28.437593  782653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 00:05:28.453634  782653 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 00:05:28.453724  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 00:05:28.471239  782653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 00:05:28.493830  782653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 00:05:28.520139  782653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1209 00:05:28.549492  782653 ssh_runner.go:195] Run: grep 192.168.83.217	control-plane.minikube.internal$ /etc/hosts
	I1209 00:05:28.554579  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:28.753857  782653 ssh_runner.go:195] Run: sudo systemctl start kubelet
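
	# Sketch of the equivalent manual steps the three commands above perform,
	# assuming the same IP and unit files: pin the control-plane name in
	# /etc/hosts, then reload systemd and start kubelet.
	grep -q 'control-plane.minikube.internal' /etc/hosts || \
	  echo '192.168.83.217 control-plane.minikube.internal' | sudo tee -a /etc/hosts
	sudo systemctl daemon-reload
	sudo systemctl start kubelet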
	I1209 00:05:28.773412  782653 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880 for IP: 192.168.83.217
	I1209 00:05:28.773448  782653 certs.go:195] generating shared ca certs ...
	I1209 00:05:28.773475  782653 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:28.773724  782653 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1209 00:05:28.773877  782653 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1209 00:05:28.773921  782653 certs.go:257] generating profile certs ...
	I1209 00:05:28.774082  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/client.key
	I1209 00:05:28.774272  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key.66e6a13d
	I1209 00:05:28.774378  782653 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key
	I1209 00:05:28.774576  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1209 00:05:28.774636  782653 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1209 00:05:28.774654  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 00:05:28.774697  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1209 00:05:28.774736  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1209 00:05:28.774784  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1209 00:05:28.774872  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:28.776246  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 00:05:28.810505  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 00:05:28.842442  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 00:05:28.877226  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 00:05:28.908400  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 00:05:28.945631  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 00:05:28.979242  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 00:05:29.010043  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 00:05:29.051848  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1209 00:05:29.095619  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1209 00:05:29.133913  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 00:05:29.167271  782653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 00:05:29.189858  782653 ssh_runner.go:195] Run: openssl version
	I1209 00:05:29.197275  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.213666  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1209 00:05:29.226518  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233152  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233284  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.240720  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 00:05:29.257715  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.273541  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 00:05:29.287565  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293576  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293641  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.301507  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 00:05:29.317647  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.340472  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1209 00:05:29.392662  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417247  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417320  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.436429  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
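
	# Sketch of the hash-symlink convention the loop above implements: OpenSSL
	# resolves trusted CAs in /etc/ssl/certs by subject hash, so each PEM gets
	# a "<hash>.0" symlink (e.g. b5213941.0 for minikubeCA.pem above).
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"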
	I1209 00:05:29.462985  782653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 00:05:29.472337  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 00:05:29.490957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 00:05:29.504936  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 00:05:29.522957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 00:05:29.538865  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 00:05:29.563145  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
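
	# Sketch: "-checkend 86400" exits 0 if the certificate will still be valid
	# 24h (86400s) from now and 1 otherwise, which is how the checks above
	# decide whether a certificate needs regeneration.
	if ! openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "certificate expires within 24h; regenerate it"
	fi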
	I1209 00:05:29.581605  782653 kubeadm.go:401] StartCluster: {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:05:29.581729  782653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 00:05:29.581823  782653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 00:05:29.688653  782653 cri.go:89] found id: "ed65d6af397731b1f5197ca1ee72a10abb2e0c22f62636e7bf2f7991071908cd"
	I1209 00:05:29.688692  782653 cri.go:89] found id: "9d522f4cf939d18e1de8df559158d043f98ae2ae01d8e14fe19b99d12c966f9f"
	I1209 00:05:29.688699  782653 cri.go:89] found id: "1797d0193cbe8ccd00b871fd19c9db605c89849a37a5010a5b0afa9022e4bf5f"
	I1209 00:05:29.688704  782653 cri.go:89] found id: "f00f2f5cffabec2b84bd23963ef53056ad87c8c1144d913e8afc9138caa5aa55"
	I1209 00:05:29.688709  782653 cri.go:89] found id: "db99b4ce7c7601a2d364718d8dd4fd7d04ea390b975cdec540ad671bbacaff1a"
	I1209 00:05:29.688728  782653 cri.go:89] found id: "3b25232d3c3957b7529e17f93abc0620cdd1d4bfa51469cdb8094edfce1aa828"
	I1209 00:05:29.688734  782653 cri.go:89] found id: ""
	I1209 00:05:29.688797  782653 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
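The truncated log above ends while enumerating kube-system containers by ID. A minimal sketch of the same enumeration, assuming crictl is pointed at the CRI-O socket used throughout this run:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json
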
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-165880 -n pause-165880
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-165880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-165880 logs -n 25: (1.50454779s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-769581 │ jenkins │ v1.37.0 │ 09 Dec 25 00:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio               │ kubernetes-upgrade-769581 │ jenkins │ v1.37.0 │ 09 Dec 25 00:02 UTC │ 09 Dec 25 00:03 UTC │
	│ start   │ -p pause-165880 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-165880              │ jenkins │ v1.37.0 │ 09 Dec 25 00:03 UTC │ 09 Dec 25 00:04 UTC │
	│ stop    │ stopped-upgrade-316150 stop                                                                                                                                 │ stopped-upgrade-316150    │ jenkins │ v1.35.0 │ 09 Dec 25 00:03 UTC │ 09 Dec 25 00:03 UTC │
	│ start   │ -p stopped-upgrade-316150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-316150    │ jenkins │ v1.37.0 │ 09 Dec 25 00:03 UTC │ 09 Dec 25 00:04 UTC │
	│ delete  │ -p kubernetes-upgrade-769581                                                                                                                                │ kubernetes-upgrade-769581 │ jenkins │ v1.37.0 │ 09 Dec 25 00:03 UTC │ 09 Dec 25 00:03 UTC │
	│ start   │ -p auto-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:03 UTC │ 09 Dec 25 00:05 UTC │
	│ start   │ -p cert-expiration-134582 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-134582    │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:04 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-316150 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-316150    │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │                     │
	│ delete  │ -p stopped-upgrade-316150                                                                                                                                   │ stopped-upgrade-316150    │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:04 UTC │
	│ start   │ -p kindnet-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:05 UTC │
	│ delete  │ -p cert-expiration-134582                                                                                                                                   │ cert-expiration-134582    │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:04 UTC │
	│ start   │ -p calico-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                        │ calico-474683             │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │                     │
	│ start   │ -p pause-165880 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-165880              │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 pgrep -a kubelet                                                                                                                             │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p kindnet-474683 pgrep -a kubelet                                                                                                                          │ kindnet-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/nsswitch.conf                                                                                                                  │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/hosts                                                                                                                          │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/resolv.conf                                                                                                                    │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo crictl pods                                                                                                                             │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo crictl ps --all                                                                                                                         │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo ip a s                                                                                                                                  │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo ip r s                                                                                                                                  │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo iptables-save                                                                                                                           │ auto-474683               │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 00:04:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 00:04:55.497792  782653 out.go:360] Setting OutFile to fd 1 ...
	I1209 00:04:55.497905  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.497917  782653 out.go:374] Setting ErrFile to fd 2...
	I1209 00:04:55.497923  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.498170  782653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1209 00:04:55.498699  782653 out.go:368] Setting JSON to false
	I1209 00:04:55.499711  782653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10035,"bootTime":1765228660,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 00:04:55.499775  782653 start.go:143] virtualization: kvm guest
	I1209 00:04:55.501320  782653 out.go:179] * [pause-165880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 00:04:55.502424  782653 out.go:179]   - MINIKUBE_LOCATION=22075
	I1209 00:04:55.502444  782653 notify.go:221] Checking for updates...
	I1209 00:04:55.504965  782653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 00:04:55.505986  782653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:04:55.506972  782653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:55.508046  782653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 00:04:55.509063  782653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 00:04:55.510716  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:04:55.511456  782653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 00:04:55.547113  782653 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 00:04:55.548094  782653 start.go:309] selected driver: kvm2
	I1209 00:04:55.548112  782653 start.go:927] validating driver "kvm2" against &{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.548282  782653 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 00:04:55.549299  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:04:55.549406  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:04:55.549484  782653 start.go:353] cluster config:
	{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.549643  782653 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 00:04:55.551246  782653 out.go:179] * Starting "pause-165880" primary control-plane node in "pause-165880" cluster
	I1209 00:04:54.124280  782623 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:04:54.124326  782623 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 00:04:54.124346  782623 cache.go:65] Caching tarball of preloaded images
	I1209 00:04:54.124486  782623 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 00:04:54.124507  782623 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 00:04:54.124660  782623 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json ...
	I1209 00:04:54.124695  782623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json: {Name:mka35ef7265cdc8907f55aafe10eb574e8505e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:04:54.124911  782623 start.go:360] acquireMachinesLock for calico-474683: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 00:04:58.643259  782623 start.go:364] duration metric: took 4.518306838s to acquireMachinesLock for "calico-474683"
	I1209 00:04:58.643347  782623 start.go:93] Provisioning new machine with config: &{Name:calico-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 00:04:58.643490  782623 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 00:04:54.695305  781906 addons.go:530] duration metric: took 1.113476368s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 00:04:54.698055  781906 system_pods.go:59] 8 kube-system pods found
	I1209 00:04:54.698121  781906 system_pods.go:61] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.698144  781906 system_pods.go:61] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.698162  781906 system_pods.go:61] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 00:04:54.698175  781906 system_pods.go:61] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.698244  781906 system_pods.go:61] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.698265  781906 system_pods.go:61] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.698292  781906 system_pods.go:61] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.698308  781906 system_pods.go:61] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending
	I1209 00:04:54.698323  781906 system_pods.go:74] duration metric: took 5.933243ms to wait for pod list to return data ...
	I1209 00:04:54.698339  781906 default_sa.go:34] waiting for default service account to be created ...
	I1209 00:04:54.701422  781906 default_sa.go:45] found service account: "default"
	I1209 00:04:54.701445  781906 default_sa.go:55] duration metric: took 3.096713ms for default service account to be created ...
	I1209 00:04:54.701457  781906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 00:04:54.707479  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:54.707516  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.707526  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.707536  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 00:04:54.707549  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.707559  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.707571  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.707582  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.707588  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending
	I1209 00:04:54.707629  781906 retry.go:31] will retry after 214.747702ms: missing components: kube-dns, kube-proxy
	I1209 00:04:54.737737  781906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-474683" context rescaled to 1 replicas
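
	# Sketch of the rescale noted above (kapi.go:214), assuming kubectl access
	# to the auto-474683 context:
	kubectl --context auto-474683 -n kube-system scale deployment coredns --replicas=1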
	I1209 00:04:54.929957  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:54.930008  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.930017  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.930035  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:54.930043  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.930052  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.930057  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.930062  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.930068  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:54.930087  781906 retry.go:31] will retry after 268.492274ms: missing components: kube-dns, kube-proxy
	I1209 00:04:55.208622  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:55.208659  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.208667  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.208673  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:55.208679  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:55.208685  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:55.208692  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:55.208699  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:55.208706  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:55.208731  781906 retry.go:31] will retry after 408.177628ms: missing components: kube-dns, kube-proxy
	I1209 00:04:55.623746  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:55.623787  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.623799  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.623806  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:55.623813  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:55.623824  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:55.623831  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:55.623841  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:55.623850  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:55.623877  781906 retry.go:31] will retry after 535.312228ms: missing components: kube-dns, kube-proxy
	I1209 00:04:56.163880  781906 system_pods.go:86] 7 kube-system pods found
	I1209 00:04:56.163912  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:56.163918  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:56.163925  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:56.163928  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running
	I1209 00:04:56.163933  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Running
	I1209 00:04:56.163937  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:56.163940  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Running
	I1209 00:04:56.163949  781906 system_pods.go:126] duration metric: took 1.462485333s to wait for k8s-apps to be running ...
	I1209 00:04:56.163956  781906 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 00:04:56.164004  781906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 00:04:56.181297  781906 system_svc.go:56] duration metric: took 17.33099ms WaitForService to wait for kubelet
	I1209 00:04:56.181327  781906 kubeadm.go:587] duration metric: took 2.599558559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 00:04:56.181344  781906 node_conditions.go:102] verifying NodePressure condition ...
	I1209 00:04:56.185284  781906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 00:04:56.185311  781906 node_conditions.go:123] node cpu capacity is 2
	I1209 00:04:56.185326  781906 node_conditions.go:105] duration metric: took 3.976776ms to run NodePressure ...
	I1209 00:04:56.185338  781906 start.go:242] waiting for startup goroutines ...
	I1209 00:04:56.185346  781906 start.go:247] waiting for cluster config update ...
	I1209 00:04:56.185381  781906 start.go:256] writing updated cluster config ...
	I1209 00:04:56.185704  781906 ssh_runner.go:195] Run: rm -f paused
	I1209 00:04:56.190630  781906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 00:04:56.193744  781906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x9bsg" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 00:04:58.200682  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
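
	# Sketch: an equivalent explicit wait for one of the labelled control-plane
	# pods tracked above, assuming kubectl access to the cluster:
	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m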
	I1209 00:04:58.645576  782623 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 00:04:58.645814  782623 start.go:159] libmachine.API.Create for "calico-474683" (driver="kvm2")
	I1209 00:04:58.645863  782623 client.go:173] LocalClient.Create starting
	I1209 00:04:58.645978  782623 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem
	I1209 00:04:58.646027  782623 main.go:143] libmachine: Decoding PEM data...
	I1209 00:04:58.646053  782623 main.go:143] libmachine: Parsing certificate...
	I1209 00:04:58.646145  782623 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem
	I1209 00:04:58.646175  782623 main.go:143] libmachine: Decoding PEM data...
	I1209 00:04:58.646192  782623 main.go:143] libmachine: Parsing certificate...
	I1209 00:04:58.646598  782623 main.go:143] libmachine: creating domain...
	I1209 00:04:58.646612  782623 main.go:143] libmachine: creating network...
	I1209 00:04:58.648301  782623 main.go:143] libmachine: found existing default network
	I1209 00:04:58.648588  782623 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 00:04:58.649499  782623 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:61:94} reservation:<nil>}
	I1209 00:04:58.650887  782623 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cb1090}
	I1209 00:04:58.650993  782623 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-calico-474683</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 00:04:58.657136  782623 main.go:143] libmachine: creating private network mk-calico-474683 192.168.50.0/24...
	I1209 00:04:58.734111  782623 main.go:143] libmachine: private network mk-calico-474683 192.168.50.0/24 created
	I1209 00:04:58.734438  782623 main.go:143] libmachine: <network>
	  <name>mk-calico-474683</name>
	  <uuid>e4268c5f-78c9-4dbc-9747-b885c11a1ce2</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:d0:ae:9e'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
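
	# Sketch of equivalent virsh calls for the XML above, assuming it is saved
	# as net.xml (libmachine performs the same operations via the libvirt API):
	virsh net-define net.xml
	virsh net-start mk-calico-474683
	virsh net-autostart mk-calico-474683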
	
	I1209 00:04:58.734477  782623 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 ...
	I1209 00:04:58.734507  782623 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 00:04:58.734525  782623 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:58.734599  782623 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22075-744871/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1209 00:04:59.025400  782623 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa...
	I1209 00:04:57.279303  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.280089  782285 main.go:143] libmachine: domain kindnet-474683 has current primary IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.280108  782285 main.go:143] libmachine: found domain IP: 192.168.72.143
	I1209 00:04:57.280115  782285 main.go:143] libmachine: reserving static IP address...
	I1209 00:04:57.280589  782285 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-474683", mac: "52:54:00:30:63:fa", ip: "192.168.72.143"} in network mk-kindnet-474683
	I1209 00:04:57.512095  782285 main.go:143] libmachine: reserved static IP address 192.168.72.143 for domain kindnet-474683
	I1209 00:04:57.512125  782285 main.go:143] libmachine: waiting for SSH...
	I1209 00:04:57.512133  782285 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 00:04:57.515285  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.515860  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.515891  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.516112  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.516396  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.516408  782285 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 00:04:57.619077  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
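
	# Sketch of the WaitForSSH probe above: poll a no-op command until sshd in
	# the new VM answers. The "docker" user and per-machine key path are
	# assumptions, not taken from this log.
	until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    -i ~/.minikube/machines/kindnet-474683/id_rsa docker@192.168.72.143 'exit 0'; do
	  sleep 1
	done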
	I1209 00:04:57.619504  782285 main.go:143] libmachine: domain creation complete
	I1209 00:04:57.621002  782285 machine.go:94] provisionDockerMachine start ...
	I1209 00:04:57.623506  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.623894  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.623916  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.624167  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.624506  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.624533  782285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:04:57.729411  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 00:04:57.729450  782285 buildroot.go:166] provisioning hostname "kindnet-474683"
	I1209 00:04:57.732617  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.732973  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.732997  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.733153  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.733355  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.733389  782285 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-474683 && echo "kindnet-474683" | sudo tee /etc/hostname
	I1209 00:04:57.848586  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-474683
	
	I1209 00:04:57.851988  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.852466  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.852501  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.852683  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.852932  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.852948  782285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-474683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-474683/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-474683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:04:57.962152  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
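
The shell block above updates /etc/hosts idempotently: skip if the hostname is already mapped, otherwise rewrite the 127.0.1.1 line or append one. The same edit in Go, as a sketch only (ensureHostsEntry is an illustrative name, and the file is assumed to use simple whitespace-separated lines):

package provision

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee sequence above: if no line
// in the hosts file already names the host, rewrite the 127.0.1.1 entry
// (or append one).
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return nil // hostname already mapped, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}
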
	I1209 00:04:57.962184  782285 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:04:57.962233  782285 buildroot.go:174] setting up certificates
	I1209 00:04:57.962259  782285 provision.go:84] configureAuth start
	I1209 00:04:57.965602  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.966086  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.966111  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968472  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968785  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.968810  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968936  782285 provision.go:143] copyHostCerts
	I1209 00:04:57.968984  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:04:57.968994  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:04:57.969061  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:04:57.969166  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:04:57.969184  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:04:57.969212  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:04:57.969284  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:04:57.969291  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:04:57.969324  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:04:57.969440  782285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.kindnet-474683 san=[127.0.0.1 192.168.72.143 kindnet-474683 localhost minikube]
	I1209 00:04:57.989183  782285 provision.go:177] copyRemoteCerts
	I1209 00:04:57.989239  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:04:57.991512  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.991817  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.991858  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.992004  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.072806  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1209 00:04:58.105049  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 00:04:58.134582  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:04:58.163781  782285 provision.go:87] duration metric: took 201.506786ms to configureAuth
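
configureAuth above generates a server certificate whose SANs cover every name the machine answers to ([127.0.0.1 192.168.72.143 kindnet-474683 localhost minikube] in this run). A sketch of issuing such a cert with crypto/x509, assuming the CA cert and key are already parsed; signServerCert is an illustrative name:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate whose SANs match the
// "generating server cert" line in the log above.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-474683"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: DNS names plus IPs.
		DNSNames:    []string{"kindnet-474683", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.143")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
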
	I1209 00:04:58.163811  782285 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:04:58.164001  782285 config.go:182] Loaded profile config "kindnet-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:04:58.167196  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.167606  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.167630  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.167871  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:58.168075  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:58.168092  782285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:04:58.402389  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:04:58.402428  782285 machine.go:97] duration metric: took 781.407625ms to provisionDockerMachine
	I1209 00:04:58.402441  782285 client.go:176] duration metric: took 21.341493812s to LocalClient.Create
	I1209 00:04:58.402460  782285 start.go:167] duration metric: took 21.341590734s to libmachine.API.Create "kindnet-474683"
	I1209 00:04:58.402468  782285 start.go:293] postStartSetup for "kindnet-474683" (driver="kvm2")
	I1209 00:04:58.402477  782285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:04:58.402559  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:04:58.405659  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.406131  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.406159  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.406358  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.488461  782285 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:04:58.493227  782285 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:04:58.493252  782285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:04:58.493348  782285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:04:58.493468  782285 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:04:58.493614  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:04:58.506485  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:04:58.537700  782285 start.go:296] duration metric: took 135.217969ms for postStartSetup
	I1209 00:04:58.541086  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.541675  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.541716  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.542020  782285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/config.json ...
	I1209 00:04:58.542199  782285 start.go:128] duration metric: took 21.486120725s to createHost
	I1209 00:04:58.544309  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.544748  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.544771  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.544913  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:58.545108  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:58.545118  782285 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:04:58.643113  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238698.610252562
	
	I1209 00:04:58.643136  782285 fix.go:216] guest clock: 1765238698.610252562
	I1209 00:04:58.643146  782285 fix.go:229] Guest: 2025-12-09 00:04:58.610252562 +0000 UTC Remote: 2025-12-09 00:04:58.542211705 +0000 UTC m=+34.151197884 (delta=68.040857ms)
	I1209 00:04:58.643169  782285 fix.go:200] guest clock delta is within tolerance: 68.040857ms
	I1209 00:04:58.643175  782285 start.go:83] releasing machines lock for "kindnet-474683", held for 21.587322637s
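
The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it to host time captured around the call (delta=68.040857ms here). A small sketch of that parse-and-compare, under the assumption that the fractional part is a 9-digit nanosecond field; the function names are illustrative:

package provision

import (
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N`
// (e.g. "1765238698.610252562") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// clockDelta reports how far the guest clock is from the host clock and
// whether it falls within the tolerance, as the "guest clock delta is
// within tolerance" line does above.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}
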
	I1209 00:04:58.646600  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.646999  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.647032  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.647609  782285 ssh_runner.go:195] Run: cat /version.json
	I1209 00:04:58.647690  782285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:04:58.651064  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651405  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651499  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.651525  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651708  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.652030  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.652055  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.652265  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.734021  782285 ssh_runner.go:195] Run: systemctl --version
	I1209 00:04:58.759150  782285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:04:58.915862  782285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:04:58.922295  782285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:04:58.922392  782285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:04:58.942570  782285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 00:04:58.942599  782285 start.go:496] detecting cgroup driver to use...
	I1209 00:04:58.942672  782285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:04:58.963078  782285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:04:58.981260  782285 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:04:58.981338  782285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:04:59.005693  782285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:04:59.022252  782285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:04:59.178217  782285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:04:59.402267  782285 docker.go:234] disabling docker service ...
	I1209 00:04:59.402378  782285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:04:59.418525  782285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:04:59.438151  782285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:04:59.597807  782285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:04:59.746317  782285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:04:59.762484  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:04:59.784025  782285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:04:59.784090  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.798846  782285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:04:59.798922  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.815161  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.829320  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.842181  782285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:04:59.855393  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.869033  782285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.891395  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
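
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and adjust the default sysctls. A Go sketch of the same line-level rewrite, assuming the drop-in uses plain `key = value` lines; patchCrioConf is an illustrative name:

package provision

import (
	"os"
	"regexp"
)

// patchCrioConf applies the core edits the sed commands above perform
// on the CRI-O drop-in: pin the pause image and set the cgroup manager.
func patchCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
	return os.WriteFile(path, out, 0644)
}
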
	I1209 00:04:59.903321  782285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:04:59.913383  782285 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 00:04:59.913448  782285 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 00:04:59.938818  782285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
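
The netfilter fallback above is worth noting: when the bridge-nf-call-iptables sysctl key is missing the br_netfilter module simply has not been loaded yet, so the code loads it and then enables IPv4 forwarding. A sketch of that sequence (requires root; enableBridgeNetfilter is an illustrative name):

package provision

import (
	"fmt"
	"os"
	"os/exec"
)

// enableBridgeNetfilter mirrors the fallback in the log: if the
// bridge-nf-call-iptables key is absent, load br_netfilter, then make
// sure IPv4 forwarding is on.
func enableBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Key missing: the module is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}
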
	I1209 00:04:59.953569  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:00.111095  782285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:00.234005  782285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:00.234093  782285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:00.239858  782285 start.go:564] Will wait 60s for crictl version
	I1209 00:05:00.239918  782285 ssh_runner.go:195] Run: which crictl
	I1209 00:05:00.244120  782285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:00.282690  782285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:00.282779  782285 ssh_runner.go:195] Run: crio --version
	I1209 00:05:00.315622  782285 ssh_runner.go:195] Run: crio --version
	I1209 00:05:00.350420  782285 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 00:04:55.552247  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:04:55.552306  782653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 00:04:55.552318  782653 cache.go:65] Caching tarball of preloaded images
	I1209 00:04:55.552424  782653 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 00:04:55.552435  782653 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 00:04:55.552555  782653 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/config.json ...
	I1209 00:04:55.552768  782653 start.go:360] acquireMachinesLock for pause-165880: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	W1209 00:05:00.200840  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:02.201786  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:04:59.130668  782623 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk...
	I1209 00:04:59.130714  782623 main.go:143] libmachine: Writing magic tar header
	I1209 00:04:59.130739  782623 main.go:143] libmachine: Writing SSH key tar header
	I1209 00:04:59.130827  782623 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 ...
	I1209 00:04:59.130895  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683
	I1209 00:04:59.130934  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 (perms=drwx------)
	I1209 00:04:59.130954  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines
	I1209 00:04:59.130964  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines (perms=drwxr-xr-x)
	I1209 00:04:59.130976  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:59.130987  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube (perms=drwxr-xr-x)
	I1209 00:04:59.130996  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871
	I1209 00:04:59.131006  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871 (perms=drwxrwxr-x)
	I1209 00:04:59.131018  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1209 00:04:59.131028  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 00:04:59.131038  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1209 00:04:59.131048  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 00:04:59.131056  782623 main.go:143] libmachine: checking permissions on dir: /home
	I1209 00:04:59.131073  782623 main.go:143] libmachine: skipping /home - not owner
	I1209 00:04:59.131079  782623 main.go:143] libmachine: defining domain...
	I1209 00:04:59.132581  782623 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>calico-474683</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-calico-474683'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
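
The XML above is handed to libvirt to define the domain; the "starting domain..." lines that follow correspond to booting it. A minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt) against the qemu:///system URI from the cluster config; defineAndStart is an illustrative name:

package provision

import (
	"libvirt.org/go/libvirt"
)

// defineAndStart registers the domain XML shown above with libvirt and
// boots it - the "defining domain..." / "starting domain..." steps.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // "start" in virsh terms
}

Note that libvirt expands the short definition into the fuller XML echoed later in the log (UUIDs, PCI addresses, controllers), which is why the two documents differ.
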
	
	I1209 00:04:59.137749  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:02:f8:72 in network default
	I1209 00:04:59.138442  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:04:59.138461  782623 main.go:143] libmachine: starting domain...
	I1209 00:04:59.138466  782623 main.go:143] libmachine: ensuring networks are active...
	I1209 00:04:59.139517  782623 main.go:143] libmachine: Ensuring network default is active
	I1209 00:04:59.140024  782623 main.go:143] libmachine: Ensuring network mk-calico-474683 is active
	I1209 00:04:59.140766  782623 main.go:143] libmachine: getting domain XML...
	I1209 00:04:59.142117  782623 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>calico-474683</name>
	  <uuid>c724cfe1-c4be-40af-b04e-123a40e05065</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:87:7e:f5'/>
	      <source network='mk-calico-474683'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:02:f8:72'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 00:05:00.577655  782623 main.go:143] libmachine: waiting for domain to start...
	I1209 00:05:00.579096  782623 main.go:143] libmachine: domain is now running
	I1209 00:05:00.579113  782623 main.go:143] libmachine: waiting for IP...
	I1209 00:05:00.580060  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:00.580932  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:00.580947  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:00.581483  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:00.581532  782623 retry.go:31] will retry after 310.06074ms: waiting for domain to come up
	I1209 00:05:00.893393  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:00.894222  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:00.894242  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:00.894751  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:00.894792  782623 retry.go:31] will retry after 313.144808ms: waiting for domain to come up
	I1209 00:05:01.209631  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:01.210528  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:01.210551  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:01.211011  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:01.211058  782623 retry.go:31] will retry after 485.330957ms: waiting for domain to come up
	I1209 00:05:01.697767  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:01.698945  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:01.698991  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:01.699516  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:01.699570  782623 retry.go:31] will retry after 607.257691ms: waiting for domain to come up
	I1209 00:05:02.308576  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:02.309591  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:02.309637  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:02.310192  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:02.310269  782623 retry.go:31] will retry after 604.798902ms: waiting for domain to come up
	I1209 00:05:02.917437  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:02.918181  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:02.918220  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:02.918826  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:02.918879  782623 retry.go:31] will retry after 781.854699ms: waiting for domain to come up
	I1209 00:05:03.702766  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:03.703453  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:03.703474  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:03.703818  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:03.703868  782623 retry.go:31] will retry after 729.916129ms: waiting for domain to come up
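
The "will retry after 310.06074ms / 313.144808ms / 485.330957ms ..." lines above come from a retry loop with growing, jittered delays while the domain acquires a DHCP lease. A sketch of that pattern, assuming roughly 1.5x growth with random jitter (the exact policy in retry.go may differ):

package provision

import (
	"errors"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait
// elapses, sleeping a jittered, growing interval between attempts -
// the pattern behind the "will retry after ..." lines above.
func retryWithBackoff(fn func() error, base, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out: " + err.Error())
		}
		// Grow the delay ~1.5x with +/-25% jitter, as the varying
		// intervals in the log suggest.
		jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
}
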
	I1209 00:05:00.355750  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:00.356319  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:00.356345  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:00.356588  782285 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:00.361897  782285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 00:05:00.378374  782285 kubeadm.go:884] updating cluster {Name:kindnet-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1209 00:05:00.378645  782285 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:00.378740  782285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:00.416483  782285 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 00:05:00.416572  782285 ssh_runner.go:195] Run: which lz4
	I1209 00:05:00.421532  782285 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 00:05:00.426517  782285 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 00:05:00.426552  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1209 00:05:01.730449  782285 crio.go:462] duration metric: took 1.308972968s to copy over tarball
	I1209 00:05:01.730555  782285 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 00:05:03.325802  782285 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.595199525s)
	I1209 00:05:03.325852  782285 crio.go:469] duration metric: took 1.595364014s to extract the tarball
	I1209 00:05:03.325862  782285 ssh_runner.go:146] rm: /preloaded.tar.lz4
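
The preload path above is: scp the lz4-compressed image tarball to /preloaded.tar.lz4, unpack it under /var with tar delegating decompression to lz4 via -I, then remove the tarball. A sketch of that extraction step with os/exec, assuming tar and lz4 are on PATH; extractPreload is an illustrative name:

package provision

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball the same way
// the log does: tar shells out to the lz4 binary via -I and preserves
// the security.capability xattrs the images need.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball) // matches the rm of /preloaded.tar.lz4
}
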
	I1209 00:05:03.366821  782285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:03.407596  782285 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:03.407635  782285 cache_images.go:86] Images are preloaded, skipping loading
	I1209 00:05:03.407649  782285 kubeadm.go:935] updating node { 192.168.72.143 8443 v1.34.2 crio true true} ...
	I1209 00:05:03.407793  782285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-474683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1209 00:05:03.407899  782285 ssh_runner.go:195] Run: crio config
	I1209 00:05:03.457353  782285 cni.go:84] Creating CNI manager for "kindnet"
	I1209 00:05:03.457411  782285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 00:05:03.457448  782285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.143 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-474683 NodeName:kindnet-474683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 00:05:03.457609  782285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-474683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
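
The multi-document YAML above is rendered from the kubeadm options struct printed just before it: the node-specific fields (advertise address, node name, port) are substituted into a fixed skeleton. A trimmed sketch of that substitution with text/template, limited to the InitConfiguration fields that vary per node; the template text and names are illustrative, not minikube's actual template:

package provision

import (
	"bytes"
	"text/template"
)

// kubeadmTmpl illustrates how the per-node fields in the config above
// (advertise address, bind port, node name) get templated in.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`))

type kubeadmParams struct {
	NodeIP        string
	APIServerPort int
	NodeName      string
}

func renderKubeadm(p kubeadmParams) (string, error) {
	var buf bytes.Buffer
	if err := kubeadmTmpl.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}
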
	
	I1209 00:05:03.457675  782285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 00:05:03.471317  782285 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 00:05:03.471444  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 00:05:03.487451  782285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1209 00:05:03.515198  782285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 00:05:03.536443  782285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1209 00:05:03.557067  782285 ssh_runner.go:195] Run: grep 192.168.72.143	control-plane.minikube.internal$ /etc/hosts
	I1209 00:05:03.561548  782285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 00:05:03.577043  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:03.768934  782285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:03.802110  782285 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683 for IP: 192.168.72.143
	I1209 00:05:03.802138  782285 certs.go:195] generating shared ca certs ...
	I1209 00:05:03.802162  782285 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.802410  782285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1209 00:05:03.802455  782285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1209 00:05:03.802465  782285 certs.go:257] generating profile certs ...
	I1209 00:05:03.802525  782285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key
	I1209 00:05:03.802566  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt with IP's: []
	I1209 00:05:03.841772  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt ...
	I1209 00:05:03.841808  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: {Name:mkf8beacbae180036263c43894b1597797a1121c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.842044  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key ...
	I1209 00:05:03.842061  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key: {Name:mk9b0a991684bdb1b7696f637236dec087a7545a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.842177  782285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c
	I1209 00:05:03.842201  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.143]
	I1209 00:05:03.910490  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c ...
	I1209 00:05:03.910529  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c: {Name:mkcae5ee52ba6232f17ee77420ed884c7f1e80b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.910747  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c ...
	I1209 00:05:03.910771  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c: {Name:mke546fbdbd6a15cfd24cf1c2dded658b8c332f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.910888  782285 certs.go:382] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt
	I1209 00:05:03.910990  782285 certs.go:386] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key
	I1209 00:05:03.911076  782285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key
	I1209 00:05:03.911099  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt with IP's: []
	I1209 00:05:04.073065  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt ...
	I1209 00:05:04.073099  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt: {Name:mk6b32511d664f4912c3c1309d42e491a99b7423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:04.073306  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key ...
	I1209 00:05:04.073333  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key: {Name:mk3dbcd8046995b2b44d3da48d1bce0bdb71117a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:04.073558  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1209 00:05:04.073605  782285 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1209 00:05:04.073613  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 00:05:04.073639  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1209 00:05:04.073661  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1209 00:05:04.073684  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1209 00:05:04.073720  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:04.074379  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 00:05:04.109105  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 00:05:04.146921  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 00:05:04.176689  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 00:05:04.210729  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 00:05:04.248785  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 00:05:04.284560  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 00:05:04.319642  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 00:05:04.354931  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 00:05:04.387335  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1209 00:05:04.420779  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1209 00:05:04.450495  782285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 00:05:04.479190  782285 ssh_runner.go:195] Run: openssl version
	I1209 00:05:04.486299  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.499696  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 00:05:04.513628  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.520151  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.520230  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.527977  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 00:05:04.540351  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 00:05:04.552407  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.563990  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1209 00:05:04.576544  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.581867  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.581941  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.589102  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 00:05:04.600785  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/748930.pem /etc/ssl/certs/51391683.0
	I1209 00:05:04.614705  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.625900  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1209 00:05:04.637169  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.642228  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.642297  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.649947  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 00:05:04.661599  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7489302.pem /etc/ssl/certs/3ec20f2e.0
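Each of the three sequences above follows the same pattern for installing a CA into the guest's OpenSSL trust store: link the PEM into /etc/ssl/certs, derive its subject hash with openssl, then create a second symlink named <hash>.0 so OpenSSL can find the CA by hash lookup (the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from exactly this step). A minimal standalone sketch of the pattern, with /usr/share/ca-certificates/example.pem as a hypothetical input:

	# install one CA into the OpenSSL trust store by subject hash
	cert=/usr/share/ca-certificates/example.pem            # hypothetical cert path
	sudo ln -fs "$cert" /etc/ssl/certs/example.pem
	hash=$(openssl x509 -hash -noout -in "$cert")          # e.g. prints "b5213941"
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"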
	I1209 00:05:04.673261  782285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 00:05:04.678504  782285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 00:05:04.678567  782285 kubeadm.go:401] StartCluster: {Name:kindnet-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:05:04.678647  782285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 00:05:04.678722  782285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 00:05:04.714521  782285 cri.go:89] found id: ""
	I1209 00:05:04.714606  782285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 00:05:04.726557  782285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 00:05:04.739557  782285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 00:05:04.752755  782285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 00:05:04.752779  782285 kubeadm.go:158] found existing configuration files:
	
	I1209 00:05:04.752843  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 00:05:04.766955  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 00:05:04.767016  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 00:05:04.779536  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 00:05:04.792061  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 00:05:04.792126  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 00:05:04.804256  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 00:05:04.815512  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 00:05:04.815592  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 00:05:04.828012  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 00:05:04.838849  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 00:05:04.838931  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
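The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the upcoming kubeadm init can regenerate it. The loop reduces to roughly the following, assuming the endpoint shown in the log:

	endpoint="https://control-plane.minikube.internal:8443"
	for name in admin kubelet controller-manager scheduler; do
	    conf="/etc/kubernetes/${name}.conf"
	    # keep the file only if it already points at the expected endpoint
	    sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	done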
	I1209 00:05:04.850356  782285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 00:05:04.897812  782285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 00:05:04.897879  782285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 00:05:04.992038  782285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 00:05:04.992214  782285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 00:05:04.992389  782285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 00:05:05.002318  782285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1209 00:05:04.701606  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:07.201154  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:04.436043  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:04.437097  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:04.437122  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:04.437650  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:04.437713  782623 retry.go:31] will retry after 1.093672032s: waiting for domain to come up
	I1209 00:05:05.533103  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:05.533822  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:05.533844  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:05.534236  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:05.534276  782623 retry.go:31] will retry after 1.405536599s: waiting for domain to come up
	I1209 00:05:06.942037  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:06.942883  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:06.942921  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:06.943412  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:06.943468  782623 retry.go:31] will retry after 1.43839653s: waiting for domain to come up
	I1209 00:05:08.383306  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:08.383933  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:08.383950  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:08.384316  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:08.384356  782623 retry.go:31] will retry after 2.211169168s: waiting for domain to come up
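The retry.go lines above show libmachine polling libvirt for the new domain's IP, first from the DHCP lease table and then from ARP, sleeping a growing interval between attempts. An equivalent wait loop from a shell, against the same qemu:///system URI (the doubling factor is an assumption; the real delays are jittered):

	delay=1
	while true; do
	    ip=$(virsh -c qemu:///system domifaddr calico-474683 --source arp 2>/dev/null \
	         | awk '/ipv4/ {print $4; exit}')
	    [ -n "$ip" ] && { echo "domain is up at ${ip%/*}"; break; }
	    sleep "$delay"
	    delay=$((delay * 2))   # back off before polling again
	done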
	I1209 00:05:05.006499  782285 out.go:252]   - Generating certificates and keys ...
	I1209 00:05:05.006585  782285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 00:05:05.006684  782285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 00:05:05.126718  782285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 00:05:05.251499  782285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 00:05:05.325788  782285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 00:05:05.682856  782285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 00:05:06.138114  782285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 00:05:06.138260  782285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-474683 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I1209 00:05:06.441765  782285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 00:05:06.442672  782285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-474683 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I1209 00:05:06.561232  782285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 00:05:07.489279  782285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 00:05:07.663920  782285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 00:05:07.664180  782285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 00:05:08.034913  782285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 00:05:08.897713  782285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 00:05:09.097089  782285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 00:05:09.359985  782285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 00:05:09.482574  782285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 00:05:09.483278  782285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 00:05:09.485757  782285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1209 00:05:09.700856  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:12.200649  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:10.597522  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:10.598216  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:10.598236  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:10.598702  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:10.598748  782623 retry.go:31] will retry after 3.491313112s: waiting for domain to come up
	I1209 00:05:09.487394  782285 out.go:252]   - Booting up control plane ...
	I1209 00:05:09.487539  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 00:05:09.487650  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 00:05:09.487814  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 00:05:09.511914  782285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 00:05:09.512095  782285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 00:05:09.519838  782285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 00:05:09.520280  782285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 00:05:09.520496  782285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 00:05:09.726767  782285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 00:05:09.726940  782285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 00:05:10.228007  782285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.618996ms
	I1209 00:05:10.233314  782285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 00:05:10.233477  782285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.143:8443/livez
	I1209 00:05:10.233605  782285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 00:05:10.233699  782285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 00:05:13.028668  782285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.797197022s
	I1209 00:05:14.088109  782285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.857452147s
	I1209 00:05:15.731591  782285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501678478s
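The three [control-plane-check] probes poll each component's local health endpoint until it answers. The same checks can be reproduced by hand on the control-plane host against the ports shown above (anonymous access to these paths is normally allowed by the default system:public-info-viewer binding):

	curl -k https://192.168.72.143:8443/livez    # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler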
	I1209 00:05:15.751587  782285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 00:05:15.771624  782285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 00:05:15.787433  782285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 00:05:15.787663  782285 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-474683 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 00:05:15.797968  782285 kubeadm.go:319] [bootstrap-token] Using token: 5wug0n.476zgzlpe1a8r7t2
	I1209 00:05:15.800240  782285 out.go:252]   - Configuring RBAC rules ...
	I1209 00:05:15.800383  782285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 00:05:15.803844  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 00:05:15.810173  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 00:05:15.813775  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 00:05:15.817473  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 00:05:15.822760  782285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 00:05:16.137722  782285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 00:05:16.585958  782285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 00:05:17.137495  782285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 00:05:17.138255  782285 kubeadm.go:319] 
	I1209 00:05:17.138382  782285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 00:05:17.138397  782285 kubeadm.go:319] 
	I1209 00:05:17.138519  782285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 00:05:17.138532  782285 kubeadm.go:319] 
	I1209 00:05:17.138568  782285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 00:05:17.138685  782285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 00:05:17.138765  782285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 00:05:17.138782  782285 kubeadm.go:319] 
	I1209 00:05:17.138875  782285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 00:05:17.138901  782285 kubeadm.go:319] 
	I1209 00:05:17.138977  782285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 00:05:17.138989  782285 kubeadm.go:319] 
	I1209 00:05:17.139062  782285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 00:05:17.139166  782285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 00:05:17.139222  782285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 00:05:17.139228  782285 kubeadm.go:319] 
	I1209 00:05:17.139305  782285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 00:05:17.139381  782285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 00:05:17.139388  782285 kubeadm.go:319] 
	I1209 00:05:17.139452  782285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5wug0n.476zgzlpe1a8r7t2 \
	I1209 00:05:17.139547  782285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a \
	I1209 00:05:17.139589  782285 kubeadm.go:319] 	--control-plane 
	I1209 00:05:17.139599  782285 kubeadm.go:319] 
	I1209 00:05:17.139712  782285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 00:05:17.139721  782285 kubeadm.go:319] 
	I1209 00:05:17.139803  782285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5wug0n.476zgzlpe1a8r7t2 \
	I1209 00:05:17.139899  782285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a 
	I1209 00:05:17.141470  782285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
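The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 digest of the cluster CA's DER-encoded public key; joining nodes use it to pin the CA during TLS bootstrap. It can be recomputed on the control plane with the standard kubeadm recipe (assuming the CA at /etc/kubernetes/pki/ca.crt and an RSA key):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'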
	I1209 00:05:17.141514  782285 cni.go:84] Creating CNI manager for "kindnet"
	I1209 00:05:17.143711  782285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1209 00:05:14.699588  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:16.700215  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:14.092135  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:14.092915  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:14.092935  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:14.093404  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:14.093442  782623 retry.go:31] will retry after 3.91631774s: waiting for domain to come up
	I1209 00:05:18.011206  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.012145  782623 main.go:143] libmachine: domain calico-474683 has current primary IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.012166  782623 main.go:143] libmachine: found domain IP: 192.168.50.66
	I1209 00:05:18.012192  782623 main.go:143] libmachine: reserving static IP address...
	I1209 00:05:18.012668  782623 main.go:143] libmachine: unable to find host DHCP lease matching {name: "calico-474683", mac: "52:54:00:87:7e:f5", ip: "192.168.50.66"} in network mk-calico-474683
	I1209 00:05:18.315130  782623 main.go:143] libmachine: reserved static IP address 192.168.50.66 for domain calico-474683
	I1209 00:05:18.315161  782623 main.go:143] libmachine: waiting for SSH...
	I1209 00:05:18.315169  782623 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 00:05:18.318822  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.319494  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.319537  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.319768  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.320102  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.320120  782623 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 00:05:18.425256  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
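WaitForSSH treats a successful no-op command as proof the guest is reachable: it keeps dialing and running exit 0 until sshd accepts the connection. An equivalent probe from the host, using the key path and docker user that appear in the sshutil lines below:

	until ssh -i /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa \
	          -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	          docker@192.168.50.66 'exit 0' 2>/dev/null; do
	    sleep 2   # sshd not accepting connections yet; retry
	done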
	I1209 00:05:18.425684  782623 main.go:143] libmachine: domain creation complete
	I1209 00:05:18.427342  782623 machine.go:94] provisionDockerMachine start ...
	I1209 00:05:18.430074  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.430461  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.430485  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.430724  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.430974  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.430988  782623 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:05:18.535768  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 00:05:18.535798  782623 buildroot.go:166] provisioning hostname "calico-474683"
	I1209 00:05:18.538984  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.539405  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.539431  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.539643  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.539912  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.539926  782623 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-474683 && echo "calico-474683" | sudo tee /etc/hostname
	I1209 00:05:18.667176  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-474683
	
	I1209 00:05:18.670783  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.671220  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.671245  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.671448  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.671687  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.671704  782623 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-474683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-474683/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-474683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:05:18.788084  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:05:18.788128  782623 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:05:18.788162  782623 buildroot.go:174] setting up certificates
	I1209 00:05:18.788177  782623 provision.go:84] configureAuth start
	I1209 00:05:18.791396  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.791903  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.791962  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794511  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794848  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.794867  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794973  782623 provision.go:143] copyHostCerts
	I1209 00:05:18.795030  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:05:18.795040  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:05:18.795108  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:05:18.795195  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:05:18.795202  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:05:18.795227  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:05:18.795286  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:05:18.795293  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:05:18.795314  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:05:18.795373  782623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.calico-474683 san=[127.0.0.1 192.168.50.66 calico-474683 localhost minikube]
	I1209 00:05:18.988521  782623 provision.go:177] copyRemoteCerts
	I1209 00:05:18.988585  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:05:18.991720  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.992112  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.992143  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.992297  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:17.144825  782285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 00:05:17.150494  782285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 00:05:17.150515  782285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 00:05:17.177487  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 00:05:17.444475  782285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 00:05:17.444572  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:17.444595  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-474683 minikube.k8s.io/updated_at=2025_12_09T00_05_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2846307350d09469fc6b6b47dd0c4837fa740d9c minikube.k8s.io/name=kindnet-474683 minikube.k8s.io/primary=true
	I1209 00:05:17.472705  782285 ops.go:34] apiserver oom_adj: -16
	I1209 00:05:17.552826  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:18.053613  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:18.553273  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:19.053157  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:19.665568  782653 start.go:364] duration metric: took 24.112756677s to acquireMachinesLock for "pause-165880"
	I1209 00:05:19.665613  782653 start.go:96] Skipping create...Using existing machine configuration
	I1209 00:05:19.665627  782653 fix.go:54] fixHost starting: 
	I1209 00:05:19.668343  782653 fix.go:112] recreateIfNeeded on pause-165880: state=Running err=<nil>
	W1209 00:05:19.668411  782653 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 00:05:19.670388  782653 out.go:252] * Updating the running kvm2 "pause-165880" VM ...
	I1209 00:05:19.670427  782653 machine.go:94] provisionDockerMachine start ...
	I1209 00:05:19.674341  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.674886  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.674928  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.675273  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.675624  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.675652  782653 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:05:19.791731  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.791773  782653 buildroot.go:166] provisioning hostname "pause-165880"
	I1209 00:05:19.795624  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796205  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.796234  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796514  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.796747  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.796759  782653 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-165880 && echo "pause-165880" | sudo tee /etc/hostname
	I1209 00:05:19.936746  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.940045  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940462  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.940493  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940654  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.940846  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.940860  782653 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-165880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-165880/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-165880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:05:20.060582  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:05:20.060614  782653 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:05:20.060650  782653 buildroot.go:174] setting up certificates
	I1209 00:05:20.060664  782653 provision.go:84] configureAuth start
	I1209 00:05:20.065295  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.066045  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.066090  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069288  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069780  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.069809  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.070050  782653 provision.go:143] copyHostCerts
	I1209 00:05:20.070117  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:05:20.070131  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:05:20.070204  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:05:20.070358  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:05:20.070393  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:05:20.070432  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:05:20.070548  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:05:20.070561  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:05:20.070599  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:05:20.070687  782653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.pause-165880 san=[127.0.0.1 192.168.83.217 localhost minikube pause-165880]
	I1209 00:05:20.171275  782653 provision.go:177] copyRemoteCerts
	I1209 00:05:20.171338  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:05:20.174350  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.174927  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.174953  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.175169  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:20.271573  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:05:20.314206  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1209 00:05:20.346866  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 00:05:20.384460  782653 provision.go:87] duration metric: took 323.774611ms to configureAuth
	I1209 00:05:20.384496  782653 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:05:20.384810  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:20.387997  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388483  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.388520  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388698  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:20.388903  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:20.388917  782653 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:05:19.075354  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:05:19.104938  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 00:05:19.140399  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 00:05:19.169734  782623 provision.go:87] duration metric: took 381.538879ms to configureAuth
	I1209 00:05:19.169770  782623 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:05:19.170004  782623 config.go:182] Loaded profile config "calico-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:19.173022  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.173467  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.173490  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.173695  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.173924  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:19.173943  782623 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:05:19.411888  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:05:19.411928  782623 machine.go:97] duration metric: took 984.566314ms to provisionDockerMachine
	I1209 00:05:19.411944  782623 client.go:176] duration metric: took 20.766069419s to LocalClient.Create
	I1209 00:05:19.411968  782623 start.go:167] duration metric: took 20.766154983s to libmachine.API.Create "calico-474683"
	I1209 00:05:19.411979  782623 start.go:293] postStartSetup for "calico-474683" (driver="kvm2")
	I1209 00:05:19.411994  782623 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:05:19.412087  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:05:19.415350  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.415803  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.415831  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.415996  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.499356  782623 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:05:19.504238  782623 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:05:19.504276  782623 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:05:19.504351  782623 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:05:19.504469  782623 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:05:19.504593  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:05:19.516545  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:19.547605  782623 start.go:296] duration metric: took 135.602984ms for postStartSetup
	I1209 00:05:19.551043  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.551592  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.551636  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.551909  782623 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json ...
	I1209 00:05:19.552130  782623 start.go:128] duration metric: took 20.908623666s to createHost
	I1209 00:05:19.555033  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.555550  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.555583  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.555826  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.556132  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:19.556153  782623 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:05:19.665405  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238719.637112106
	
	I1209 00:05:19.665432  782623 fix.go:216] guest clock: 1765238719.637112106
	I1209 00:05:19.665443  782623 fix.go:229] Guest: 2025-12-09 00:05:19.637112106 +0000 UTC Remote: 2025-12-09 00:05:19.55215123 +0000 UTC m=+25.556397708 (delta=84.960876ms)
	I1209 00:05:19.665462  782623 fix.go:200] guest clock delta is within tolerance: 84.960876ms
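The delta is simply guest minus host timestamp: 1765238719.637112106 - 1765238719.552151230 = 0.084960876 s, the 84.960876ms logged above, well inside the allowed skew. The check reduces to comparing two date +%s.%N readings (the 2s tolerance below is an assumption, not minikube's exact value):

	tol=2
	guest=$(ssh docker@192.168.50.66 'date +%s.%N')   # guest clock
	host=$(date +%s.%N)                               # host clock
	awk -v g="$guest" -v h="$host" -v t="$tol" \
	    'BEGIN { d = g - h; if (d < 0) d = -d; exit (d > t) }' \
	  || echo "guest clock skew above ${tol}s: resync required"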
	I1209 00:05:19.665467  782623 start.go:83] releasing machines lock for "calico-474683", held for 21.022157975s
	I1209 00:05:19.669335  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.669872  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.669908  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.670603  782623 ssh_runner.go:195] Run: cat /version.json
	I1209 00:05:19.670683  782623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:05:19.674184  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674500  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674774  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.674813  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674997  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.675254  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.675289  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.675531  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.778693  782623 ssh_runner.go:195] Run: systemctl --version
	I1209 00:05:19.785108  782623 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:05:19.946927  782623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:05:19.954729  782623 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:05:19.954819  782623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:05:19.977397  782623 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
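The find/-exec step above sidelines any preinstalled bridge or podman CNI config by renaming it with a .mk_disabled suffix so cri-o ignores it; in this run exactly one file was moved:

	/etc/cni/net.d/87-podman-bridge.conflist -> /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled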
	I1209 00:05:19.977421  782623 start.go:496] detecting cgroup driver to use...
	I1209 00:05:19.977510  782623 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:05:19.998858  782623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:05:20.017676  782623 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:05:20.017743  782623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:05:20.038240  782623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:05:20.057453  782623 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:05:20.221815  782623 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:05:20.466241  782623 docker.go:234] disabling docker service ...
	I1209 00:05:20.466316  782623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:05:20.483841  782623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:05:20.500194  782623 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:05:20.685006  782623 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:05:20.835571  782623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:05:20.852594  782623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:05:20.876595  782623 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:05:20.876709  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.888987  782623 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:05:20.889049  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.902380  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.917003  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.933201  782623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:05:20.946967  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.960480  782623 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.984081  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
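Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys (a sketch of the net effect; the surrounding TOML tables are not shown in the log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]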
	I1209 00:05:20.998395  782623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:05:21.009435  782623 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 00:05:21.009511  782623 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 00:05:21.037579  782623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
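The failed sysctl at 00:05:21.009435 is expected on a fresh guest: the net.bridge.bridge-nf-call-iptables key only appears once the br_netfilter module is loaded, so the probe doubles as a load check and its failure simply triggers the modprobe that follows. A Go sketch of that probe-then-load fallback (run is a hypothetical stand-in for minikube's ssh_runner):

	// ensureBridgeNetfilter probes the sysctl and loads br_netfilter on failure.
	func ensureBridgeNetfilter(run func(cmd string) error) error {
		if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err == nil {
			return nil // key exists, module already loaded
		}
		return run("sudo modprobe br_netfilter")
	}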
	I1209 00:05:21.053001  782623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:21.233232  782623 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:21.359183  782623 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:21.359279  782623 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:21.365045  782623 start.go:564] Will wait 60s for crictl version
	I1209 00:05:21.365120  782623 ssh_runner.go:195] Run: which crictl
	I1209 00:05:21.369237  782623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:21.402947  782623 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:21.403038  782623 ssh_runner.go:195] Run: crio --version
	I1209 00:05:21.434746  782623 ssh_runner.go:195] Run: crio --version
	I1209 00:05:21.469197  782623 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 00:05:19.552962  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:20.053230  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:20.553604  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:21.053765  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:21.553623  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:22.053153  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:22.170222  782285 kubeadm.go:1114] duration metric: took 4.725729945s to wait for elevateKubeSystemPrivileges
	I1209 00:05:22.170285  782285 kubeadm.go:403] duration metric: took 17.491717988s to StartCluster
	I1209 00:05:22.170314  782285 settings.go:142] acquiring lock: {Name:mk01a7d116accfccda14c363bded9d7c0216d454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:22.170447  782285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:05:22.172335  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/kubeconfig: {Name:mk0db57d03f858808a26818547681e8d59b0a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:22.172643  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 00:05:22.172676  782285 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 00:05:22.172798  782285 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 00:05:22.172893  782285 addons.go:70] Setting storage-provisioner=true in profile "kindnet-474683"
	I1209 00:05:22.172911  782285 addons.go:239] Setting addon storage-provisioner=true in "kindnet-474683"
	I1209 00:05:22.172932  782285 config.go:182] Loaded profile config "kindnet-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:22.172954  782285 host.go:66] Checking if "kindnet-474683" exists ...
	I1209 00:05:22.172983  782285 addons.go:70] Setting default-storageclass=true in profile "kindnet-474683"
	I1209 00:05:22.172996  782285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-474683"
	I1209 00:05:22.174805  782285 out.go:179] * Verifying Kubernetes components...
	I1209 00:05:22.176019  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:22.177140  782285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 00:05:22.178044  782285 addons.go:239] Setting addon default-storageclass=true in "kindnet-474683"
	I1209 00:05:22.178087  782285 host.go:66] Checking if "kindnet-474683" exists ...
	I1209 00:05:22.178379  782285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 00:05:22.178397  782285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 00:05:22.180538  782285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 00:05:22.180559  782285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 00:05:22.183323  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.184062  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:22.184118  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.184417  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:05:22.184996  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.186001  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:22.186040  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.186266  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:05:22.433854  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
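The sed pipeline above patches CoreDNS's Corefile in flight: it inserts a hosts block immediately before the forward plugin and a log directive before errors, then pushes the result back with kubectl replace. Assuming an otherwise stock Corefile, the patched server block looks roughly like:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The injected record is what lets pods resolve host.minikube.internal to the host's bridge address, confirmed at 00:05:22.877417 below.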
	I1209 00:05:22.530225  782285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:22.710987  782285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 00:05:22.756262  782285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 00:05:22.877417  782285 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1209 00:05:22.878841  782285 node_ready.go:35] waiting up to 15m0s for node "kindnet-474683" to be "Ready" ...
	I1209 00:05:23.391178  782285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-474683" context rescaled to 1 replicas
	I1209 00:05:23.598055  782285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1209 00:05:19.200298  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:21.200565  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:23.201257  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:21.473193  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:21.473702  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:21.473733  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:21.473960  782623 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:21.478713  782623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 00:05:21.494058  782623 kubeadm.go:884] updating cluster {Name:calico-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.66 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:21.494207  782623 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:21.494261  782623 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:21.531065  782623 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 00:05:21.531149  782623 ssh_runner.go:195] Run: which lz4
	I1209 00:05:21.535690  782623 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 00:05:21.540611  782623 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 00:05:21.540649  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
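The stat failure just above is the normal miss path: the preload tarball is shipped only when the guest does not already have it, which spares restarted clusters the ~340MB copy (the pause-165880 run later in this log hits the fast path and skips extraction entirely). A sketch of the check-then-copy pattern, with SSHRunner as a hypothetical stand-in for minikube's ssh_runner:

	// SSHRunner is a hypothetical interface over the guest SSH session.
	type SSHRunner interface {
		Run(cmd string) error                    // run a command; non-zero exit is an error
		Copy(localPath, remotePath string) error // scp a local file onto the guest
	}

	// ensurePreload uploads the tarball only if the guest lacks it.
	func ensurePreload(r SSHRunner, local, remote string) error {
		if err := r.Run(`stat -c "%s %y" ` + remote); err == nil {
			return nil // already present; skip the copy
		}
		return r.Copy(local, remote)
	}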
	I1209 00:05:23.599260  782285 addons.go:530] duration metric: took 1.426451881s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 00:05:26.003810  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:05:26.003841  782653 machine.go:97] duration metric: took 6.33340561s to provisionDockerMachine
	I1209 00:05:26.003854  782653 start.go:293] postStartSetup for "pause-165880" (driver="kvm2")
	I1209 00:05:26.003864  782653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:05:26.003941  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:05:26.007221  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007720  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.007781  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007981  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.100638  782653 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:05:26.105932  782653 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:05:26.105968  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:05:26.106049  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:05:26.106130  782653 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:05:26.106227  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:05:26.123738  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:26.167380  782653 start.go:296] duration metric: took 163.489508ms for postStartSetup
	I1209 00:05:26.167445  782653 fix.go:56] duration metric: took 6.501816173s for fixHost
	I1209 00:05:26.171923  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172486  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.172518  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172775  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:26.173094  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:26.173118  782653 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:05:26.293758  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238726.290651991
	
	I1209 00:05:26.293787  782653 fix.go:216] guest clock: 1765238726.290651991
	I1209 00:05:26.293797  782653 fix.go:229] Guest: 2025-12-09 00:05:26.290651991 +0000 UTC Remote: 2025-12-09 00:05:26.167452687 +0000 UTC m=+30.731624268 (delta=123.199304ms)
	I1209 00:05:26.293823  782653 fix.go:200] guest clock delta is within tolerance: 123.199304ms
	I1209 00:05:26.293829  782653 start.go:83] releasing machines lock for "pause-165880", held for 6.628237017s
	I1209 00:05:26.297200  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.297750  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.297786  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.298435  782653 ssh_runner.go:195] Run: cat /version.json
	I1209 00:05:26.298534  782653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:05:26.302194  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302574  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302770  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.302815  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302991  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.303012  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.303153  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.303414  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.388338  782653 ssh_runner.go:195] Run: systemctl --version
	I1209 00:05:26.411503  782653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:05:26.564483  782653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:05:26.577338  782653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:05:26.577435  782653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:05:26.589629  782653 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 00:05:26.589669  782653 start.go:496] detecting cgroup driver to use...
	I1209 00:05:26.589771  782653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:05:26.614167  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:05:26.634398  782653 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:05:26.634551  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:05:26.655828  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:05:26.677740  782653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:05:26.879759  782653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:05:27.075050  782653 docker.go:234] disabling docker service ...
	I1209 00:05:27.075148  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:05:27.108544  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:05:27.128174  782653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:05:27.333496  782653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:05:27.527709  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:05:27.547600  782653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:05:27.573078  782653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:05:27.573176  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.591439  782653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:05:27.591536  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.610214  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.624565  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.637537  782653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:05:27.652581  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.667490  782653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.683625  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.699870  782653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:05:27.713298  782653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 00:05:27.726610  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:27.922280  782653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:28.154862  782653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:28.154956  782653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:28.160673  782653 start.go:564] Will wait 60s for crictl version
	I1209 00:05:28.160757  782653 ssh_runner.go:195] Run: which crictl
	I1209 00:05:28.165831  782653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:28.203701  782653 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:28.203843  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.238662  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.282458  782653 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1209 00:05:25.205127  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:27.701719  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:24.565207  782623 crio.go:462] duration metric: took 3.029526862s to copy over tarball
	I1209 00:05:24.565324  782623 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W1209 00:05:24.886171  782285 node_ready.go:57] node "kindnet-474683" has "Ready":"False" status (will retry)
	W1209 00:05:27.383880  782285 node_ready.go:57] node "kindnet-474683" has "Ready":"False" status (will retry)
	I1209 00:05:28.287928  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288417  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:28.288452  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288697  782653 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:28.295003  782653 kubeadm.go:884] updating cluster {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:28.295164  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:28.295231  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.342780  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.342817  782653 crio.go:433] Images already preloaded, skipping extraction
	I1209 00:05:28.342903  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.378433  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.378469  782653 cache_images.go:86] Images are preloaded, skipping loading
	I1209 00:05:28.378482  782653 kubeadm.go:935] updating node { 192.168.83.217 8443 v1.34.2 crio true true} ...
	I1209 00:05:28.378663  782653 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-165880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 00:05:28.378778  782653 ssh_runner.go:195] Run: crio config
	I1209 00:05:28.437108  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:05:28.437142  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:05:28.437168  782653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 00:05:28.437201  782653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-165880 NodeName:pause-165880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 00:05:28.437474  782653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-165880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 00:05:28.437593  782653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 00:05:28.453634  782653 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 00:05:28.453724  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 00:05:28.471239  782653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 00:05:28.493830  782653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 00:05:28.520139  782653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1209 00:05:28.549492  782653 ssh_runner.go:195] Run: grep 192.168.83.217	control-plane.minikube.internal$ /etc/hosts
	I1209 00:05:28.554579  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:28.753857  782653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:28.773412  782653 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880 for IP: 192.168.83.217
	I1209 00:05:28.773448  782653 certs.go:195] generating shared ca certs ...
	I1209 00:05:28.773475  782653 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:28.773724  782653 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1209 00:05:28.773877  782653 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1209 00:05:28.773921  782653 certs.go:257] generating profile certs ...
	I1209 00:05:28.774082  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/client.key
	I1209 00:05:28.774272  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key.66e6a13d
	I1209 00:05:28.774378  782653 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key
	I1209 00:05:28.774576  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1209 00:05:28.774636  782653 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1209 00:05:28.774654  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 00:05:28.774697  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1209 00:05:28.774736  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1209 00:05:28.774784  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1209 00:05:28.774872  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:28.776246  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 00:05:28.810505  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 00:05:28.842442  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 00:05:28.877226  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 00:05:28.908400  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 00:05:28.945631  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 00:05:28.979242  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 00:05:29.010043  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 00:05:29.051848  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1209 00:05:29.095619  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1209 00:05:29.133913  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 00:05:29.167271  782653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 00:05:29.189858  782653 ssh_runner.go:195] Run: openssl version
	I1209 00:05:29.197275  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.213666  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1209 00:05:29.226518  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233152  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233284  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.240720  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 00:05:29.257715  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.273541  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 00:05:29.287565  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293576  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293641  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.301507  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 00:05:29.317647  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.340472  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1209 00:05:29.392662  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417247  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417320  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.436429  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
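The openssl/ln/test triplets above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and TLS clients locate a CA in /etc/ssl/certs through a symlink named <hash>.0, which the `sudo test -L` calls verify. For the minikube CA in this run the pairing is:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ test -L /etc/ssl/certs/b5213941.0 && echo ok   # hash-named symlink resolves
	ok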
	I1209 00:05:29.462985  782653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 00:05:29.472337  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 00:05:29.490957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 00:05:29.504936  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 00:05:29.522957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 00:05:29.538865  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 00:05:29.563145  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
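The six -checkend probes above are expiry guards: `openssl x509 -checkend 86400` exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero exit here would mark a cert for regeneration before kubeadm runs. For example:

	$ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h"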
	I1209 00:05:29.581605  782653 kubeadm.go:401] StartCluster: {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:05:29.581729  782653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 00:05:29.581823  782653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 00:05:29.688653  782653 cri.go:89] found id: "ed65d6af397731b1f5197ca1ee72a10abb2e0c22f62636e7bf2f7991071908cd"
	I1209 00:05:29.688692  782653 cri.go:89] found id: "9d522f4cf939d18e1de8df559158d043f98ae2ae01d8e14fe19b99d12c966f9f"
	I1209 00:05:29.688699  782653 cri.go:89] found id: "1797d0193cbe8ccd00b871fd19c9db605c89849a37a5010a5b0afa9022e4bf5f"
	I1209 00:05:29.688704  782653 cri.go:89] found id: "f00f2f5cffabec2b84bd23963ef53056ad87c8c1144d913e8afc9138caa5aa55"
	I1209 00:05:29.688709  782653 cri.go:89] found id: "db99b4ce7c7601a2d364718d8dd4fd7d04ea390b975cdec540ad671bbacaff1a"
	I1209 00:05:29.688728  782653 cri.go:89] found id: "3b25232d3c3957b7529e17f93abc0620cdd1d4bfa51469cdb8094edfce1aa828"
	I1209 00:05:29.688734  782653 cri.go:89] found id: ""
	I1209 00:05:29.688797  782653 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-165880 -n pause-165880
helpers_test.go:269: (dbg) Run:  kubectl --context pause-165880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-165880 -n pause-165880
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-165880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-165880 logs -n 25: (1.542580981s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-316150 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-316150 │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │                     │
	│ delete  │ -p stopped-upgrade-316150                                                                                                                                   │ stopped-upgrade-316150 │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:04 UTC │
	│ start   │ -p kindnet-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-474683         │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:05 UTC │
	│ delete  │ -p cert-expiration-134582                                                                                                                                   │ cert-expiration-134582 │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:04 UTC │
	│ start   │ -p calico-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                        │ calico-474683          │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │                     │
	│ start   │ -p pause-165880 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-165880           │ jenkins │ v1.37.0 │ 09 Dec 25 00:04 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 pgrep -a kubelet                                                                                                                             │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p kindnet-474683 pgrep -a kubelet                                                                                                                          │ kindnet-474683         │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/nsswitch.conf                                                                                                                  │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/hosts                                                                                                                          │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/resolv.conf                                                                                                                    │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo crictl pods                                                                                                                             │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo crictl ps --all                                                                                                                         │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo ip a s                                                                                                                                  │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo ip r s                                                                                                                                  │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo iptables-save                                                                                                                           │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo iptables -t nat -L -n -v                                                                                                                │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo systemctl status kubelet --all --full --no-pager                                                                                        │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo systemctl cat kubelet --no-pager                                                                                                        │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                         │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /etc/kubernetes/kubelet.conf                                                                                                        │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo cat /var/lib/kubelet/config.yaml                                                                                                        │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	│ ssh     │ -p auto-474683 sudo systemctl status docker --all --full --no-pager                                                                                         │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │                     │
	│ ssh     │ -p auto-474683 sudo systemctl cat docker --no-pager                                                                                                         │ auto-474683            │ jenkins │ v1.37.0 │ 09 Dec 25 00:05 UTC │ 09 Dec 25 00:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 00:04:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 00:04:55.497792  782653 out.go:360] Setting OutFile to fd 1 ...
	I1209 00:04:55.497905  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.497917  782653 out.go:374] Setting ErrFile to fd 2...
	I1209 00:04:55.497923  782653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:04:55.498170  782653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1209 00:04:55.498699  782653 out.go:368] Setting JSON to false
	I1209 00:04:55.499711  782653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10035,"bootTime":1765228660,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 00:04:55.499775  782653 start.go:143] virtualization: kvm guest
	I1209 00:04:55.501320  782653 out.go:179] * [pause-165880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 00:04:55.502424  782653 out.go:179]   - MINIKUBE_LOCATION=22075
	I1209 00:04:55.502444  782653 notify.go:221] Checking for updates...
	I1209 00:04:55.504965  782653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 00:04:55.505986  782653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:04:55.506972  782653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:55.508046  782653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 00:04:55.509063  782653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 00:04:55.510716  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:04:55.511456  782653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 00:04:55.547113  782653 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 00:04:55.548094  782653 start.go:309] selected driver: kvm2
	I1209 00:04:55.548112  782653 start.go:927] validating driver "kvm2" against &{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.548282  782653 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 00:04:55.549299  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:04:55.549406  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:04:55.549484  782653 start.go:353] cluster config:
	{Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:04:55.549643  782653 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 00:04:55.551246  782653 out.go:179] * Starting "pause-165880" primary control-plane node in "pause-165880" cluster
	I1209 00:04:54.124280  782623 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:04:54.124326  782623 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 00:04:54.124346  782623 cache.go:65] Caching tarball of preloaded images
	I1209 00:04:54.124486  782623 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 00:04:54.124507  782623 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 00:04:54.124660  782623 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json ...
	I1209 00:04:54.124695  782623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json: {Name:mka35ef7265cdc8907f55aafe10eb574e8505e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:04:54.124911  782623 start.go:360] acquireMachinesLock for calico-474683: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 00:04:58.643259  782623 start.go:364] duration metric: took 4.518306838s to acquireMachinesLock for "calico-474683"
	I1209 00:04:58.643347  782623 start.go:93] Provisioning new machine with config: &{Name:calico-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 00:04:58.643490  782623 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 00:04:54.695305  781906 addons.go:530] duration metric: took 1.113476368s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 00:04:54.698055  781906 system_pods.go:59] 8 kube-system pods found
	I1209 00:04:54.698121  781906 system_pods.go:61] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.698144  781906 system_pods.go:61] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.698162  781906 system_pods.go:61] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 00:04:54.698175  781906 system_pods.go:61] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.698244  781906 system_pods.go:61] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.698265  781906 system_pods.go:61] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.698292  781906 system_pods.go:61] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.698308  781906 system_pods.go:61] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending
	I1209 00:04:54.698323  781906 system_pods.go:74] duration metric: took 5.933243ms to wait for pod list to return data ...
	I1209 00:04:54.698339  781906 default_sa.go:34] waiting for default service account to be created ...
	I1209 00:04:54.701422  781906 default_sa.go:45] found service account: "default"
	I1209 00:04:54.701445  781906 default_sa.go:55] duration metric: took 3.096713ms for default service account to be created ...
	I1209 00:04:54.701457  781906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 00:04:54.707479  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:54.707516  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.707526  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.707536  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 00:04:54.707549  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.707559  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.707571  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.707582  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.707588  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending
	I1209 00:04:54.707629  781906 retry.go:31] will retry after 214.747702ms: missing components: kube-dns, kube-proxy
	I1209 00:04:54.737737  781906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-474683" context rescaled to 1 replicas
	I1209 00:04:54.929957  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:54.930008  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.930017  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:54.930035  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:54.930043  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:54.930052  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:54.930057  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:54.930062  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:54.930068  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:54.930087  781906 retry.go:31] will retry after 268.492274ms: missing components: kube-dns, kube-proxy
	I1209 00:04:55.208622  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:55.208659  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.208667  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.208673  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:55.208679  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:55.208685  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:55.208692  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:55.208699  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:55.208706  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:55.208731  781906 retry.go:31] will retry after 408.177628ms: missing components: kube-dns, kube-proxy
	I1209 00:04:55.623746  781906 system_pods.go:86] 8 kube-system pods found
	I1209 00:04:55.623787  781906 system_pods.go:89] "coredns-66bc5c9577-g6cr9" [3c499af4-e0ee-43f6-ae09-571ebf9b6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.623799  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:55.623806  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:55.623813  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:55.623824  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 00:04:55.623831  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 00:04:55.623841  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:55.623850  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 00:04:55.623877  781906 retry.go:31] will retry after 535.312228ms: missing components: kube-dns, kube-proxy
	I1209 00:04:56.163880  781906 system_pods.go:86] 7 kube-system pods found
	I1209 00:04:56.163912  781906 system_pods.go:89] "coredns-66bc5c9577-x9bsg" [23bd5f5b-ee7b-4635-93df-0ecf38c174fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 00:04:56.163918  781906 system_pods.go:89] "etcd-auto-474683" [38980ded-6b61-47f6-bb96-3d60315969c4] Running
	I1209 00:04:56.163925  781906 system_pods.go:89] "kube-apiserver-auto-474683" [52afa9e2-ff71-47d7-8026-e78dee5e4f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 00:04:56.163928  781906 system_pods.go:89] "kube-controller-manager-auto-474683" [98925305-0c18-42e3-9d1c-86b92d983b3c] Running
	I1209 00:04:56.163933  781906 system_pods.go:89] "kube-proxy-mt2ql" [250e413e-9e22-4695-a251-cf1db58ce41c] Running
	I1209 00:04:56.163937  781906 system_pods.go:89] "kube-scheduler-auto-474683" [d0802f3e-2048-4ed3-aa48-b727fc64b2b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 00:04:56.163940  781906 system_pods.go:89] "storage-provisioner" [73de640c-4d2b-4fb3-b3fd-c9fa5c932d5f] Running
	I1209 00:04:56.163949  781906 system_pods.go:126] duration metric: took 1.462485333s to wait for k8s-apps to be running ...
	I1209 00:04:56.163956  781906 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 00:04:56.164004  781906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 00:04:56.181297  781906 system_svc.go:56] duration metric: took 17.33099ms WaitForService to wait for kubelet
	I1209 00:04:56.181327  781906 kubeadm.go:587] duration metric: took 2.599558559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 00:04:56.181344  781906 node_conditions.go:102] verifying NodePressure condition ...
	I1209 00:04:56.185284  781906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 00:04:56.185311  781906 node_conditions.go:123] node cpu capacity is 2
	I1209 00:04:56.185326  781906 node_conditions.go:105] duration metric: took 3.976776ms to run NodePressure ...
	I1209 00:04:56.185338  781906 start.go:242] waiting for startup goroutines ...
	I1209 00:04:56.185346  781906 start.go:247] waiting for cluster config update ...
	I1209 00:04:56.185381  781906 start.go:256] writing updated cluster config ...
	I1209 00:04:56.185704  781906 ssh_runner.go:195] Run: rm -f paused
	I1209 00:04:56.190630  781906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 00:04:56.193744  781906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x9bsg" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 00:04:58.200682  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:04:58.645576  782623 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 00:04:58.645814  782623 start.go:159] libmachine.API.Create for "calico-474683" (driver="kvm2")
	I1209 00:04:58.645863  782623 client.go:173] LocalClient.Create starting
	I1209 00:04:58.645978  782623 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem
	I1209 00:04:58.646027  782623 main.go:143] libmachine: Decoding PEM data...
	I1209 00:04:58.646053  782623 main.go:143] libmachine: Parsing certificate...
	I1209 00:04:58.646145  782623 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem
	I1209 00:04:58.646175  782623 main.go:143] libmachine: Decoding PEM data...
	I1209 00:04:58.646192  782623 main.go:143] libmachine: Parsing certificate...
	I1209 00:04:58.646598  782623 main.go:143] libmachine: creating domain...
	I1209 00:04:58.646612  782623 main.go:143] libmachine: creating network...
	I1209 00:04:58.648301  782623 main.go:143] libmachine: found existing default network
	I1209 00:04:58.648588  782623 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 00:04:58.649499  782623 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:61:94} reservation:<nil>}
	I1209 00:04:58.650887  782623 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cb1090}
	I1209 00:04:58.650993  782623 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-calico-474683</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 00:04:58.657136  782623 main.go:143] libmachine: creating private network mk-calico-474683 192.168.50.0/24...
	I1209 00:04:58.734111  782623 main.go:143] libmachine: private network mk-calico-474683 192.168.50.0/24 created
	I1209 00:04:58.734438  782623 main.go:143] libmachine: <network>
	  <name>mk-calico-474683</name>
	  <uuid>e4268c5f-78c9-4dbc-9747-b885c11a1ce2</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:d0:ae:9e'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 00:04:58.734477  782623 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 ...
	I1209 00:04:58.734507  782623 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 00:04:58.734525  782623 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:58.734599  782623 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22075-744871/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1209 00:04:59.025400  782623 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa...
	I1209 00:04:57.279303  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.280089  782285 main.go:143] libmachine: domain kindnet-474683 has current primary IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.280108  782285 main.go:143] libmachine: found domain IP: 192.168.72.143
	I1209 00:04:57.280115  782285 main.go:143] libmachine: reserving static IP address...
	I1209 00:04:57.280589  782285 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-474683", mac: "52:54:00:30:63:fa", ip: "192.168.72.143"} in network mk-kindnet-474683
	I1209 00:04:57.512095  782285 main.go:143] libmachine: reserved static IP address 192.168.72.143 for domain kindnet-474683
	I1209 00:04:57.512125  782285 main.go:143] libmachine: waiting for SSH...
	I1209 00:04:57.512133  782285 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 00:04:57.515285  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.515860  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.515891  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.516112  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.516396  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.516408  782285 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 00:04:57.619077  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:04:57.619504  782285 main.go:143] libmachine: domain creation complete
	I1209 00:04:57.621002  782285 machine.go:94] provisionDockerMachine start ...
	I1209 00:04:57.623506  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.623894  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.623916  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.624167  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.624506  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.624533  782285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:04:57.729411  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 00:04:57.729450  782285 buildroot.go:166] provisioning hostname "kindnet-474683"
	I1209 00:04:57.732617  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.732973  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.732997  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.733153  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.733355  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.733389  782285 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-474683 && echo "kindnet-474683" | sudo tee /etc/hostname
	I1209 00:04:57.848586  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-474683
	
	I1209 00:04:57.851988  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.852466  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.852501  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.852683  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:57.852932  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:57.852948  782285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-474683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-474683/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-474683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:04:57.962152  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:04:57.962184  782285 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:04:57.962233  782285 buildroot.go:174] setting up certificates
	I1209 00:04:57.962259  782285 provision.go:84] configureAuth start
	I1209 00:04:57.965602  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.966086  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.966111  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968472  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968785  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.968810  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.968936  782285 provision.go:143] copyHostCerts
	I1209 00:04:57.968984  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:04:57.968994  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:04:57.969061  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:04:57.969166  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:04:57.969184  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:04:57.969212  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:04:57.969284  782285 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:04:57.969291  782285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:04:57.969324  782285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:04:57.969440  782285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.kindnet-474683 san=[127.0.0.1 192.168.72.143 kindnet-474683 localhost minikube]
	I1209 00:04:57.989183  782285 provision.go:177] copyRemoteCerts
	I1209 00:04:57.989239  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:04:57.991512  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.991817  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:57.991858  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:57.992004  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.072806  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1209 00:04:58.105049  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 00:04:58.134582  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:04:58.163781  782285 provision.go:87] duration metric: took 201.506786ms to configureAuth
	I1209 00:04:58.163811  782285 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:04:58.164001  782285 config.go:182] Loaded profile config "kindnet-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:04:58.167196  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.167606  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.167630  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.167871  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:58.168075  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:58.168092  782285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:04:58.402389  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:04:58.402428  782285 machine.go:97] duration metric: took 781.407625ms to provisionDockerMachine
	I1209 00:04:58.402441  782285 client.go:176] duration metric: took 21.341493812s to LocalClient.Create
	I1209 00:04:58.402460  782285 start.go:167] duration metric: took 21.341590734s to libmachine.API.Create "kindnet-474683"
	I1209 00:04:58.402468  782285 start.go:293] postStartSetup for "kindnet-474683" (driver="kvm2")
	I1209 00:04:58.402477  782285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:04:58.402559  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:04:58.405659  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.406131  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.406159  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.406358  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.488461  782285 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:04:58.493227  782285 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:04:58.493252  782285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:04:58.493348  782285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:04:58.493468  782285 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:04:58.493614  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:04:58.506485  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:04:58.537700  782285 start.go:296] duration metric: took 135.217969ms for postStartSetup
	I1209 00:04:58.541086  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.541675  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.541716  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.542020  782285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/config.json ...
	I1209 00:04:58.542199  782285 start.go:128] duration metric: took 21.486120725s to createHost
	I1209 00:04:58.544309  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.544748  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.544771  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.544913  782285 main.go:143] libmachine: Using SSH client type: native
	I1209 00:04:58.545108  782285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I1209 00:04:58.545118  782285 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:04:58.643113  782285 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238698.610252562
	
	I1209 00:04:58.643136  782285 fix.go:216] guest clock: 1765238698.610252562
	I1209 00:04:58.643146  782285 fix.go:229] Guest: 2025-12-09 00:04:58.610252562 +0000 UTC Remote: 2025-12-09 00:04:58.542211705 +0000 UTC m=+34.151197884 (delta=68.040857ms)
	I1209 00:04:58.643169  782285 fix.go:200] guest clock delta is within tolerance: 68.040857ms
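	[editor's note] The two fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it is under a tolerance. A minimal Go sketch of that check, with illustrative names and values copied from the log (not minikube's actual fix.go):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// checkClockDelta parses the guest's `date +%s.%N` output and reports
// whether the guest clock is within tolerance of the host clock.
// ParseFloat loses sub-microsecond precision, which is fine for a sketch.
func checkClockDelta(guestOut string, host time.Time, tolerance time.Duration) error {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	return nil
}

func main() {
	// Host timestamp and guest output taken from the log lines above;
	// the 2s tolerance is an assumed value for illustration.
	host := time.Date(2025, 12, 9, 0, 4, 58, 542211705, time.UTC)
	_ = checkClockDelta("1765238698.610252562", host, 2*time.Second)
}
```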
	I1209 00:04:58.643175  782285 start.go:83] releasing machines lock for "kindnet-474683", held for 21.587322637s
	I1209 00:04:58.646600  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.646999  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.647032  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.647609  782285 ssh_runner.go:195] Run: cat /version.json
	I1209 00:04:58.647690  782285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:04:58.651064  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651405  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651499  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.651525  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.651708  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.652030  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:04:58.652055  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:04:58.652265  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:04:58.734021  782285 ssh_runner.go:195] Run: systemctl --version
	I1209 00:04:58.759150  782285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:04:58.915862  782285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:04:58.922295  782285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:04:58.922392  782285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:04:58.942570  782285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 00:04:58.942599  782285 start.go:496] detecting cgroup driver to use...
	I1209 00:04:58.942672  782285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:04:58.963078  782285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:04:58.981260  782285 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:04:58.981338  782285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:04:59.005693  782285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:04:59.022252  782285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:04:59.178217  782285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:04:59.402267  782285 docker.go:234] disabling docker service ...
	I1209 00:04:59.402378  782285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:04:59.418525  782285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:04:59.438151  782285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:04:59.597807  782285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:04:59.746317  782285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:04:59.762484  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:04:59.784025  782285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:04:59.784090  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.798846  782285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:04:59.798922  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.815161  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.829320  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.842181  782285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:04:59.855393  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.869033  782285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:04:59.891395  782285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
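	[editor's note] The last two sed commands are an idempotent edit: first ensure a `default_sysctls = [` block exists in 02-crio.conf, then prepend the unprivileged-port sysctl inside it. A hypothetical Go equivalent of that edit (a sketch, not minikube code; the real sed anchors the new block after the conmon_cgroup line, this version appends at end-of-file when the block is missing):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureUnprivilegedPorts mirrors the sed pipeline above: add a
// default_sysctls block if missing, then insert the sysctl entry
// right after the opening bracket.
func ensureUnprivilegedPorts(conf string) string {
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n]\n"
	}
	re := regexp.MustCompile(`(?m)^default_sysctls *= *\[`)
	return re.ReplaceAllString(conf, "${0}\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
}

func main() {
	fmt.Print(ensureUnprivilegedPorts("cgroup_manager = \"cgroupfs\"\n"))
}
```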
	I1209 00:04:59.903321  782285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:04:59.913383  782285 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 00:04:59.913448  782285 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 00:04:59.938818  782285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 00:04:59.953569  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:00.111095  782285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:00.234005  782285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:00.234093  782285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:00.239858  782285 start.go:564] Will wait 60s for crictl version
	I1209 00:05:00.239918  782285 ssh_runner.go:195] Run: which crictl
	I1209 00:05:00.244120  782285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:00.282690  782285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:00.282779  782285 ssh_runner.go:195] Run: crio --version
	I1209 00:05:00.315622  782285 ssh_runner.go:195] Run: crio --version
	I1209 00:05:00.350420  782285 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 00:04:55.552247  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:04:55.552306  782653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 00:04:55.552318  782653 cache.go:65] Caching tarball of preloaded images
	I1209 00:04:55.552424  782653 preload.go:238] Found /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 00:04:55.552435  782653 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 00:04:55.552555  782653 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/config.json ...
	I1209 00:04:55.552768  782653 start.go:360] acquireMachinesLock for pause-165880: {Name:mk9f5a36f0f03c819637fd3ede2b02dca808c533 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	W1209 00:05:00.200840  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:02.201786  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:04:59.130668  782623 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk...
	I1209 00:04:59.130714  782623 main.go:143] libmachine: Writing magic tar header
	I1209 00:04:59.130739  782623 main.go:143] libmachine: Writing SSH key tar header
	I1209 00:04:59.130827  782623 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 ...
	I1209 00:04:59.130895  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683
	I1209 00:04:59.130934  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683 (perms=drwx------)
	I1209 00:04:59.130954  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube/machines
	I1209 00:04:59.130964  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube/machines (perms=drwxr-xr-x)
	I1209 00:04:59.130976  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:04:59.130987  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871/.minikube (perms=drwxr-xr-x)
	I1209 00:04:59.130996  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22075-744871
	I1209 00:04:59.131006  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22075-744871 (perms=drwxrwxr-x)
	I1209 00:04:59.131018  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1209 00:04:59.131028  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 00:04:59.131038  782623 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1209 00:04:59.131048  782623 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 00:04:59.131056  782623 main.go:143] libmachine: checking permissions on dir: /home
	I1209 00:04:59.131073  782623 main.go:143] libmachine: skipping /home - not owner
	I1209 00:04:59.131079  782623 main.go:143] libmachine: defining domain...
	I1209 00:04:59.132581  782623 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>calico-474683</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-calico-474683'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
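	[editor's note] The "defining domain" / "starting domain" steps around this XML correspond to libvirt's define-then-create sequence. A sketch of that flow using the libvirt Go bindings (assumes the libvirt.org/go/libvirt package and a local qemu:///system daemon; error handling trimmed to the essentials):

```go
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The domain XML shown above, saved to a file for this sketch.
	xml, err := os.ReadFile("calico-474683.xml")
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "defining domain..."
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "starting domain..."
		log.Fatal(err)
	}
}
```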
	
	I1209 00:04:59.137749  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:02:f8:72 in network default
	I1209 00:04:59.138442  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:04:59.138461  782623 main.go:143] libmachine: starting domain...
	I1209 00:04:59.138466  782623 main.go:143] libmachine: ensuring networks are active...
	I1209 00:04:59.139517  782623 main.go:143] libmachine: Ensuring network default is active
	I1209 00:04:59.140024  782623 main.go:143] libmachine: Ensuring network mk-calico-474683 is active
	I1209 00:04:59.140766  782623 main.go:143] libmachine: getting domain XML...
	I1209 00:04:59.142117  782623 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>calico-474683</name>
	  <uuid>c724cfe1-c4be-40af-b04e-123a40e05065</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/calico-474683.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:87:7e:f5'/>
	      <source network='mk-calico-474683'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:02:f8:72'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 00:05:00.577655  782623 main.go:143] libmachine: waiting for domain to start...
	I1209 00:05:00.579096  782623 main.go:143] libmachine: domain is now running
	I1209 00:05:00.579113  782623 main.go:143] libmachine: waiting for IP...
	I1209 00:05:00.580060  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:00.580932  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:00.580947  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:00.581483  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:00.581532  782623 retry.go:31] will retry after 310.06074ms: waiting for domain to come up
	I1209 00:05:00.893393  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:00.894222  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:00.894242  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:00.894751  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:00.894792  782623 retry.go:31] will retry after 313.144808ms: waiting for domain to come up
	I1209 00:05:01.209631  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:01.210528  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:01.210551  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:01.211011  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:01.211058  782623 retry.go:31] will retry after 485.330957ms: waiting for domain to come up
	I1209 00:05:01.697767  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:01.698945  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:01.698991  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:01.699516  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:01.699570  782623 retry.go:31] will retry after 607.257691ms: waiting for domain to come up
	I1209 00:05:02.308576  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:02.309591  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:02.309637  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:02.310192  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:02.310269  782623 retry.go:31] will retry after 604.798902ms: waiting for domain to come up
	I1209 00:05:02.917437  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:02.918181  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:02.918220  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:02.918826  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:02.918879  782623 retry.go:31] will retry after 781.854699ms: waiting for domain to come up
	I1209 00:05:03.702766  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:03.703453  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:03.703474  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:03.703818  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:03.703868  782623 retry.go:31] will retry after 729.916129ms: waiting for domain to come up
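	[editor's note] The repeating block above is a poll loop: query the DHCP lease table, fall back to ARP, and if no address turns up, sleep a jittered, loosely growing delay (310ms, 313ms, 485ms, 607ms, ... in this run) and retry. A minimal Go sketch of that pattern, with a placeholder lookup function (not minikube's retry.go):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP stands in for the lease-then-ARP lookup the log describes;
// it is a placeholder, not minikube's implementation.
func lookupIP() (string, error) { return "", errNoIP }

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("domain IP:", ip)
			return
		}
		// Jittered, growing delays like the sequence in the log above.
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for domain IP")
}
```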
	I1209 00:05:00.355750  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:00.356319  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:00.356345  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:00.356588  782285 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:00.361897  782285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 00:05:00.378374  782285 kubeadm.go:884] updating cluster {Name:kindnet-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:00.378645  782285 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:00.378740  782285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:00.416483  782285 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 00:05:00.416572  782285 ssh_runner.go:195] Run: which lz4
	I1209 00:05:00.421532  782285 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 00:05:00.426517  782285 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 00:05:00.426552  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1209 00:05:01.730449  782285 crio.go:462] duration metric: took 1.308972968s to copy over tarball
	I1209 00:05:01.730555  782285 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 00:05:03.325802  782285 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.595199525s)
	I1209 00:05:03.325852  782285 crio.go:469] duration metric: took 1.595364014s to extract the tarball
	I1209 00:05:03.325862  782285 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 00:05:03.366821  782285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:03.407596  782285 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:03.407635  782285 cache_images.go:86] Images are preloaded, skipping loading
	I1209 00:05:03.407649  782285 kubeadm.go:935] updating node { 192.168.72.143 8443 v1.34.2 crio true true} ...
	I1209 00:05:03.407793  782285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-474683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1209 00:05:03.407899  782285 ssh_runner.go:195] Run: crio config
	I1209 00:05:03.457353  782285 cni.go:84] Creating CNI manager for "kindnet"
	I1209 00:05:03.457411  782285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 00:05:03.457448  782285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.143 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-474683 NodeName:kindnet-474683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 00:05:03.457609  782285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-474683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 00:05:03.457675  782285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 00:05:03.471317  782285 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 00:05:03.471444  782285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 00:05:03.487451  782285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1209 00:05:03.515198  782285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 00:05:03.536443  782285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1209 00:05:03.557067  782285 ssh_runner.go:195] Run: grep 192.168.72.143	control-plane.minikube.internal$ /etc/hosts
	I1209 00:05:03.561548  782285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 00:05:03.577043  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:03.768934  782285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:03.802110  782285 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683 for IP: 192.168.72.143
	I1209 00:05:03.802138  782285 certs.go:195] generating shared ca certs ...
	I1209 00:05:03.802162  782285 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.802410  782285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1209 00:05:03.802455  782285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1209 00:05:03.802465  782285 certs.go:257] generating profile certs ...
	I1209 00:05:03.802525  782285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key
	I1209 00:05:03.802566  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt with IP's: []
	I1209 00:05:03.841772  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt ...
	I1209 00:05:03.841808  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: {Name:mkf8beacbae180036263c43894b1597797a1121c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.842044  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key ...
	I1209 00:05:03.842061  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.key: {Name:mk9b0a991684bdb1b7696f637236dec087a7545a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.842177  782285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c
	I1209 00:05:03.842201  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.143]
	I1209 00:05:03.910490  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c ...
	I1209 00:05:03.910529  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c: {Name:mkcae5ee52ba6232f17ee77420ed884c7f1e80b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.910747  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c ...
	I1209 00:05:03.910771  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c: {Name:mke546fbdbd6a15cfd24cf1c2dded658b8c332f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:03.910888  782285 certs.go:382] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt.9e46ac6c -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt
	I1209 00:05:03.910990  782285 certs.go:386] copying /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key.9e46ac6c -> /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key
	I1209 00:05:03.911076  782285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key
	I1209 00:05:03.911099  782285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt with IP's: []
	I1209 00:05:04.073065  782285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt ...
	I1209 00:05:04.073099  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt: {Name:mk6b32511d664f4912c3c1309d42e491a99b7423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:04.073306  782285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key ...
	I1209 00:05:04.073333  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key: {Name:mk3dbcd8046995b2b44d3da48d1bce0bdb71117a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
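	[editor's note] The crypto.go lines above generate profile keys and certificates with specific IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP). A self-contained Go sketch of generating such a certificate with crypto/x509; self-signed here for brevity, whereas the real profile certs are signed by minikubeCA:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Fresh key plus a certificate carrying the same IP SANs the
	// apiserver cert above is generated with.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.143"),
		},
	}
	// Template doubles as parent, i.e. the cert signs itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```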
	I1209 00:05:04.073558  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1209 00:05:04.073605  782285 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1209 00:05:04.073613  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 00:05:04.073639  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1209 00:05:04.073661  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1209 00:05:04.073684  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1209 00:05:04.073720  782285 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:04.074379  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 00:05:04.109105  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 00:05:04.146921  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 00:05:04.176689  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 00:05:04.210729  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 00:05:04.248785  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 00:05:04.284560  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 00:05:04.319642  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 00:05:04.354931  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 00:05:04.387335  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1209 00:05:04.420779  782285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1209 00:05:04.450495  782285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 00:05:04.479190  782285 ssh_runner.go:195] Run: openssl version
	I1209 00:05:04.486299  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.499696  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 00:05:04.513628  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.520151  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.520230  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:04.527977  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 00:05:04.540351  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 00:05:04.552407  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.563990  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1209 00:05:04.576544  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.581867  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.581941  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1209 00:05:04.589102  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 00:05:04.600785  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/748930.pem /etc/ssl/certs/51391683.0
	I1209 00:05:04.614705  782285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.625900  782285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1209 00:05:04.637169  782285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.642228  782285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.642297  782285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1209 00:05:04.649947  782285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 00:05:04.661599  782285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7489302.pem /etc/ssl/certs/3ec20f2e.0
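	[editor's note] The openssl/ln pairs above implement OpenSSL's CApath convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA.pem here), and a `<hash>.0` symlink in /etc/ssl/certs is how OpenSSL locates the CA at verification time. A small Go sketch of the same hash-and-symlink step (illustrative helper name):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces the hash-and-symlink dance above: OpenSSL looks
// up CAs in a CApath directory via <subject-hash>.0 symlinks.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```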
	I1209 00:05:04.673261  782285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 00:05:04.678504  782285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 00:05:04.678567  782285 kubeadm.go:401] StartCluster: {Name:kindnet-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:05:04.678647  782285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 00:05:04.678722  782285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 00:05:04.714521  782285 cri.go:89] found id: ""
	I1209 00:05:04.714606  782285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 00:05:04.726557  782285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 00:05:04.739557  782285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 00:05:04.752755  782285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 00:05:04.752779  782285 kubeadm.go:158] found existing configuration files:
	
	I1209 00:05:04.752843  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 00:05:04.766955  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 00:05:04.767016  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 00:05:04.779536  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 00:05:04.792061  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 00:05:04.792126  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 00:05:04.804256  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 00:05:04.815512  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 00:05:04.815592  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 00:05:04.828012  782285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 00:05:04.838849  782285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 00:05:04.838931  782285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 00:05:04.850356  782285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 00:05:04.897812  782285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 00:05:04.897879  782285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 00:05:04.992038  782285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 00:05:04.992214  782285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 00:05:04.992389  782285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 00:05:05.002318  782285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1209 00:05:04.701606  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:07.201154  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:04.436043  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:04.437097  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:04.437122  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:04.437650  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:04.437713  782623 retry.go:31] will retry after 1.093672032s: waiting for domain to come up
	I1209 00:05:05.533103  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:05.533822  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:05.533844  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:05.534236  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:05.534276  782623 retry.go:31] will retry after 1.405536599s: waiting for domain to come up
	I1209 00:05:06.942037  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:06.942883  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:06.942921  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:06.943412  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:06.943468  782623 retry.go:31] will retry after 1.43839653s: waiting for domain to come up
	I1209 00:05:08.383306  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:08.383933  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:08.383950  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:08.384316  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:08.384356  782623 retry.go:31] will retry after 2.211169168s: waiting for domain to come up
	I1209 00:05:05.006499  782285 out.go:252]   - Generating certificates and keys ...
	I1209 00:05:05.006585  782285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 00:05:05.006684  782285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 00:05:05.126718  782285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 00:05:05.251499  782285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 00:05:05.325788  782285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 00:05:05.682856  782285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 00:05:06.138114  782285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 00:05:06.138260  782285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-474683 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I1209 00:05:06.441765  782285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 00:05:06.442672  782285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-474683 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I1209 00:05:06.561232  782285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 00:05:07.489279  782285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 00:05:07.663920  782285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 00:05:07.664180  782285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 00:05:08.034913  782285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 00:05:08.897713  782285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 00:05:09.097089  782285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 00:05:09.359985  782285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 00:05:09.482574  782285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 00:05:09.483278  782285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 00:05:09.485757  782285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1209 00:05:09.700856  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:12.200649  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:10.597522  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:10.598216  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:10.598236  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:10.598702  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:10.598748  782623 retry.go:31] will retry after 3.491313112s: waiting for domain to come up
	I1209 00:05:09.487394  782285 out.go:252]   - Booting up control plane ...
	I1209 00:05:09.487539  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 00:05:09.487650  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 00:05:09.487814  782285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 00:05:09.511914  782285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 00:05:09.512095  782285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 00:05:09.519838  782285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 00:05:09.520280  782285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 00:05:09.520496  782285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 00:05:09.726767  782285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 00:05:09.726940  782285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 00:05:10.228007  782285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.618996ms
	I1209 00:05:10.233314  782285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 00:05:10.233477  782285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.143:8443/livez
	I1209 00:05:10.233605  782285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 00:05:10.233699  782285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 00:05:13.028668  782285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.797197022s
	I1209 00:05:14.088109  782285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.857452147s
	I1209 00:05:15.731591  782285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501678478s
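
Annotation: the control-plane-check phase simply polls each component's health endpoint (kubelet :10248/healthz, kube-apiserver :8443/livez, controller-manager :10257/healthz, scheduler :10259/livez, all shown above) until one returns HTTP 200 or the 4m0s budget runs out. A hedged sketch of such a polling loop (endpoints and timeouts taken from the log; the helper is ours, not kubeadm's code):

	// waitHealthy polls a health endpoint until it returns HTTP 200
	// or the deadline passes, mirroring kubeadm's control-plane-check.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves TLS from a cluster-internal CA; a
			// liveness probe like this does not need to verify it.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.72.143:8443/livez", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}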
	I1209 00:05:15.751587  782285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 00:05:15.771624  782285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 00:05:15.787433  782285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 00:05:15.787663  782285 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-474683 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 00:05:15.797968  782285 kubeadm.go:319] [bootstrap-token] Using token: 5wug0n.476zgzlpe1a8r7t2
	I1209 00:05:15.800240  782285 out.go:252]   - Configuring RBAC rules ...
	I1209 00:05:15.800383  782285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 00:05:15.803844  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 00:05:15.810173  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 00:05:15.813775  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 00:05:15.817473  782285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 00:05:15.822760  782285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 00:05:16.137722  782285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 00:05:16.585958  782285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 00:05:17.137495  782285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 00:05:17.138255  782285 kubeadm.go:319] 
	I1209 00:05:17.138382  782285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 00:05:17.138397  782285 kubeadm.go:319] 
	I1209 00:05:17.138519  782285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 00:05:17.138532  782285 kubeadm.go:319] 
	I1209 00:05:17.138568  782285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 00:05:17.138685  782285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 00:05:17.138765  782285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 00:05:17.138782  782285 kubeadm.go:319] 
	I1209 00:05:17.138875  782285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 00:05:17.138901  782285 kubeadm.go:319] 
	I1209 00:05:17.138977  782285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 00:05:17.138989  782285 kubeadm.go:319] 
	I1209 00:05:17.139062  782285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 00:05:17.139166  782285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 00:05:17.139222  782285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 00:05:17.139228  782285 kubeadm.go:319] 
	I1209 00:05:17.139305  782285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 00:05:17.139381  782285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 00:05:17.139388  782285 kubeadm.go:319] 
	I1209 00:05:17.139452  782285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5wug0n.476zgzlpe1a8r7t2 \
	I1209 00:05:17.139547  782285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a \
	I1209 00:05:17.139589  782285 kubeadm.go:319] 	--control-plane 
	I1209 00:05:17.139599  782285 kubeadm.go:319] 
	I1209 00:05:17.139712  782285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 00:05:17.139721  782285 kubeadm.go:319] 
	I1209 00:05:17.139803  782285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5wug0n.476zgzlpe1a8r7t2 \
	I1209 00:05:17.139899  782285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b505ea1d51a5916e1e34daedc053d9e1cdc4c18fb7af3859a1471c943bb62a6a 
	I1209 00:05:17.141470  782285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
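
Annotation: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA before trusting anything the API server sends. A small sketch of computing that hash from a CA certificate (standard kubeadm scheme; the file path is illustrative):

	// Print the kubeadm-style discovery hash ("sha256:<hex>") for a CA cert.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // illustrative path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}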
	I1209 00:05:17.141514  782285 cni.go:84] Creating CNI manager for "kindnet"
	I1209 00:05:17.143711  782285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1209 00:05:14.699588  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:16.700215  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:14.092135  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:14.092915  782623 main.go:143] libmachine: no network interface addresses found for domain calico-474683 (source=lease)
	I1209 00:05:14.092935  782623 main.go:143] libmachine: trying to list again with source=arp
	I1209 00:05:14.093404  782623 main.go:143] libmachine: unable to find current IP address of domain calico-474683 in network mk-calico-474683 (interfaces detected: [])
	I1209 00:05:14.093442  782623 retry.go:31] will retry after 3.91631774s: waiting for domain to come up
	I1209 00:05:18.011206  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.012145  782623 main.go:143] libmachine: domain calico-474683 has current primary IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.012166  782623 main.go:143] libmachine: found domain IP: 192.168.50.66
	I1209 00:05:18.012192  782623 main.go:143] libmachine: reserving static IP address...
	I1209 00:05:18.012668  782623 main.go:143] libmachine: unable to find host DHCP lease matching {name: "calico-474683", mac: "52:54:00:87:7e:f5", ip: "192.168.50.66"} in network mk-calico-474683
	I1209 00:05:18.315130  782623 main.go:143] libmachine: reserved static IP address 192.168.50.66 for domain calico-474683
	I1209 00:05:18.315161  782623 main.go:143] libmachine: waiting for SSH...
	I1209 00:05:18.315169  782623 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 00:05:18.318822  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.319494  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.319537  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.319768  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.320102  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.320120  782623 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 00:05:18.425256  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
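
Annotation: "waiting for SSH" resolves to running a trivial command (exit 0) over a freshly dialed session, and before that can even happen the driver just needs port 22 to accept TCP. A minimal reachability probe in the same spirit, using only the standard library:

	// sshReachable reports whether anything is listening on host:22 yet --
	// the cheap precondition behind "waiting for SSH..." above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(host string) bool {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 3*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println(sshReachable("192.168.50.66"))
	}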
	I1209 00:05:18.425684  782623 main.go:143] libmachine: domain creation complete
	I1209 00:05:18.427342  782623 machine.go:94] provisionDockerMachine start ...
	I1209 00:05:18.430074  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.430461  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.430485  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.430724  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.430974  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.430988  782623 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:05:18.535768  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 00:05:18.535798  782623 buildroot.go:166] provisioning hostname "calico-474683"
	I1209 00:05:18.538984  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.539405  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.539431  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.539643  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.539912  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.539926  782623 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-474683 && echo "calico-474683" | sudo tee /etc/hostname
	I1209 00:05:18.667176  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-474683
	
	I1209 00:05:18.670783  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.671220  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.671245  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.671448  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:18.671687  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:18.671704  782623 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-474683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-474683/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-474683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:05:18.788084  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:05:18.788128  782623 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:05:18.788162  782623 buildroot.go:174] setting up certificates
	I1209 00:05:18.788177  782623 provision.go:84] configureAuth start
	I1209 00:05:18.791396  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.791903  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.791962  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794511  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794848  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.794867  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.794973  782623 provision.go:143] copyHostCerts
	I1209 00:05:18.795030  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:05:18.795040  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:05:18.795108  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:05:18.795195  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:05:18.795202  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:05:18.795227  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:05:18.795286  782623 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:05:18.795293  782623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:05:18.795314  782623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:05:18.795373  782623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.calico-474683 san=[127.0.0.1 192.168.50.66 calico-474683 localhost minikube]
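
Annotation: the server cert generated here is signed by the local minikube CA and carries the names and addresses from the san=[...] list, so TLS connections to the machine verify under any of them. A compact, self-contained sketch of issuing such a SAN-bearing certificate with crypto/x509 (it creates a throwaway CA instead of loading ca.pem/ca-key.pem):

	// Issue a server certificate whose SANs match the san=[...] list above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA key (minikube would load its persisted CA instead).
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}

		leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-474683"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: every name/IP the server may be dialed as.
			DNSNames:    []string{"calico-474683", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.66")},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte DER certificate\n", len(der))
	}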
	I1209 00:05:18.988521  782623 provision.go:177] copyRemoteCerts
	I1209 00:05:18.988585  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:05:18.991720  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.992112  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:18.992143  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:18.992297  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:17.144825  782285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 00:05:17.150494  782285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 00:05:17.150515  782285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 00:05:17.177487  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 00:05:17.444475  782285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 00:05:17.444572  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:17.444595  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-474683 minikube.k8s.io/updated_at=2025_12_09T00_05_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2846307350d09469fc6b6b47dd0c4837fa740d9c minikube.k8s.io/name=kindnet-474683 minikube.k8s.io/primary=true
	I1209 00:05:17.472705  782285 ops.go:34] apiserver oom_adj: -16
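
Annotation: the `cat /proc/$(pgrep kube-apiserver)/oom_adj` check above confirms the apiserver runs with a negative OOM adjustment (-16), so the kernel's OOM killer strongly prefers other victims under memory pressure. A one-file Go equivalent of the read (pgrep replaced by an explicit pid for brevity):

	// Read a process's legacy OOM adjustment, as the log's
	// `cat /proc/$(pgrep kube-apiserver)/oom_adj` does.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func oomAdj(pid int) (string, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}

	func main() {
		v, err := oomAdj(os.Getpid()) // inspect ourselves as a stand-in
		fmt.Println(v, err)
	}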
	I1209 00:05:17.552826  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:18.053613  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:18.553273  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:19.053157  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:19.665568  782653 start.go:364] duration metric: took 24.112756677s to acquireMachinesLock for "pause-165880"
	I1209 00:05:19.665613  782653 start.go:96] Skipping create...Using existing machine configuration
	I1209 00:05:19.665627  782653 fix.go:54] fixHost starting: 
	I1209 00:05:19.668343  782653 fix.go:112] recreateIfNeeded on pause-165880: state=Running err=<nil>
	W1209 00:05:19.668411  782653 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 00:05:19.670388  782653 out.go:252] * Updating the running kvm2 "pause-165880" VM ...
	I1209 00:05:19.670427  782653 machine.go:94] provisionDockerMachine start ...
	I1209 00:05:19.674341  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.674886  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.674928  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.675273  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.675624  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.675652  782653 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 00:05:19.791731  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.791773  782653 buildroot.go:166] provisioning hostname "pause-165880"
	I1209 00:05:19.795624  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796205  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.796234  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.796514  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.796747  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.796759  782653 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-165880 && echo "pause-165880" | sudo tee /etc/hostname
	I1209 00:05:19.936746  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-165880
	
	I1209 00:05:19.940045  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940462  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:19.940493  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:19.940654  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.940846  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:19.940860  782653 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-165880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-165880/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-165880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 00:05:20.060582  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 00:05:20.060614  782653 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22075-744871/.minikube CaCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22075-744871/.minikube}
	I1209 00:05:20.060650  782653 buildroot.go:174] setting up certificates
	I1209 00:05:20.060664  782653 provision.go:84] configureAuth start
	I1209 00:05:20.065295  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.066045  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.066090  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069288  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.069780  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.069809  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.070050  782653 provision.go:143] copyHostCerts
	I1209 00:05:20.070117  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem, removing ...
	I1209 00:05:20.070131  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem
	I1209 00:05:20.070204  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/ca.pem (1082 bytes)
	I1209 00:05:20.070358  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem, removing ...
	I1209 00:05:20.070393  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem
	I1209 00:05:20.070432  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/cert.pem (1123 bytes)
	I1209 00:05:20.070548  782653 exec_runner.go:144] found /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem, removing ...
	I1209 00:05:20.070561  782653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem
	I1209 00:05:20.070599  782653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22075-744871/.minikube/key.pem (1675 bytes)
	I1209 00:05:20.070687  782653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem org=jenkins.pause-165880 san=[127.0.0.1 192.168.83.217 localhost minikube pause-165880]
	I1209 00:05:20.171275  782653 provision.go:177] copyRemoteCerts
	I1209 00:05:20.171338  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 00:05:20.174350  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.174927  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.174953  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.175169  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:20.271573  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:05:20.314206  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1209 00:05:20.346866  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 00:05:20.384460  782653 provision.go:87] duration metric: took 323.774611ms to configureAuth
	I1209 00:05:20.384496  782653 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:05:20.384810  782653 config.go:182] Loaded profile config "pause-165880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:20.387997  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388483  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:20.388520  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:20.388698  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:20.388903  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:20.388917  782653 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:05:19.075354  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 00:05:19.104938  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 00:05:19.140399  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 00:05:19.169734  782623 provision.go:87] duration metric: took 381.538879ms to configureAuth
	I1209 00:05:19.169770  782623 buildroot.go:189] setting minikube options for container-runtime
	I1209 00:05:19.170004  782623 config.go:182] Loaded profile config "calico-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:19.173022  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.173467  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.173490  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.173695  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.173924  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:19.173943  782623 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 00:05:19.411888  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
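Annotation: writing /etc/sysconfig/crio.minikube and then restarting crio implies the crio unit sources that file as environment; --insecure-registry 10.96.0.0/12 covers the cluster's service CIDR, so pulls from in-cluster registries (ClusterIP services) are allowed without TLS. One common way such a file is wired in is a systemd EnvironmentFile, sketched below as an assumption about the usual pattern, not lifted from the minikube ISO's actual unit:

	# /etc/systemd/system/crio.service.d/10-minikube.conf -- illustrative drop-in
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS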
	I1209 00:05:19.411928  782623 machine.go:97] duration metric: took 984.566314ms to provisionDockerMachine
	I1209 00:05:19.411944  782623 client.go:176] duration metric: took 20.766069419s to LocalClient.Create
	I1209 00:05:19.411968  782623 start.go:167] duration metric: took 20.766154983s to libmachine.API.Create "calico-474683"
	I1209 00:05:19.411979  782623 start.go:293] postStartSetup for "calico-474683" (driver="kvm2")
	I1209 00:05:19.411994  782623 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:05:19.412087  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:05:19.415350  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.415803  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.415831  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.415996  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.499356  782623 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:05:19.504238  782623 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:05:19.504276  782623 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:05:19.504351  782623 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:05:19.504469  782623 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:05:19.504593  782623 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:05:19.516545  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:19.547605  782623 start.go:296] duration metric: took 135.602984ms for postStartSetup
	I1209 00:05:19.551043  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.551592  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.551636  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.551909  782623 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/config.json ...
	I1209 00:05:19.552130  782623 start.go:128] duration metric: took 20.908623666s to createHost
	I1209 00:05:19.555033  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.555550  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.555583  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.555826  782623 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:19.556132  782623 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I1209 00:05:19.556153  782623 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:05:19.665405  782623 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238719.637112106
	
	I1209 00:05:19.665432  782623 fix.go:216] guest clock: 1765238719.637112106
	I1209 00:05:19.665443  782623 fix.go:229] Guest: 2025-12-09 00:05:19.637112106 +0000 UTC Remote: 2025-12-09 00:05:19.55215123 +0000 UTC m=+25.556397708 (delta=84.960876ms)
	I1209 00:05:19.665462  782623 fix.go:200] guest clock delta is within tolerance: 84.960876ms
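
Annotation: the guest-clock check runs `date +%s.%N` inside the VM and compares the result with the host's wall clock, resyncing only when the delta exceeds minikube's tolerance. A sketch of the delta computation, parsing the same seconds.nanoseconds format seen above:

	// Compute host/guest clock skew from `date +%s.%N` output.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		if len(parts) != 2 {
			return time.Time{}, fmt.Errorf("unexpected clock format %q", out)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1765238719.637112106") // value from the log
		fmt.Printf("guest clock delta: %v\n", time.Since(guest))
	}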
	I1209 00:05:19.665467  782623 start.go:83] releasing machines lock for "calico-474683", held for 21.022157975s
	I1209 00:05:19.669335  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.669872  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.669908  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.670603  782623 ssh_runner.go:195] Run: cat /version.json
	I1209 00:05:19.670683  782623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:05:19.674184  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674500  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674774  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.674813  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.674997  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.675254  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:19.675289  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:19.675531  782623 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/calico-474683/id_rsa Username:docker}
	I1209 00:05:19.778693  782623 ssh_runner.go:195] Run: systemctl --version
	I1209 00:05:19.785108  782623 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:05:19.946927  782623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:05:19.954729  782623 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:05:19.954819  782623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:05:19.977397  782623 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 00:05:19.977421  782623 start.go:496] detecting cgroup driver to use...
	I1209 00:05:19.977510  782623 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:05:19.998858  782623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:05:20.017676  782623 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:05:20.017743  782623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:05:20.038240  782623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:05:20.057453  782623 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:05:20.221815  782623 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:05:20.466241  782623 docker.go:234] disabling docker service ...
	I1209 00:05:20.466316  782623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:05:20.483841  782623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:05:20.500194  782623 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:05:20.685006  782623 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:05:20.835571  782623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:05:20.852594  782623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:05:20.876595  782623 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:05:20.876709  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.888987  782623 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:05:20.889049  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.902380  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.917003  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.933201  782623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:05:20.946967  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.960480  782623 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:20.984081  782623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
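
Annotation: the chain of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (not copied from the ISO), the affected settings should come out roughly as this TOML fragment:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]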
	I1209 00:05:20.998395  782623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:05:21.009435  782623 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 00:05:21.009511  782623 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 00:05:21.037579  782623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
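
Annotation: kube-proxy's iptables mode requires bridged pod traffic to traverse iptables, which is what net.bridge.bridge-nf-call-iptables controls; that sysctl path only exists once the br_netfilter module is loaded, hence the modprobe fallback after the failed stat above, and ip_forward must be 1 for routed pod traffic. A small read-only Go check of both knobs at their standard /proc/sys locations:

	// Verify the two kernel knobs minikube just configured.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func sysctl(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "missing (module not loaded?)"
		}
		return strings.TrimSpace(string(b))
	}

	func main() {
		fmt.Println("bridge-nf-call-iptables:", sysctl("/proc/sys/net/bridge/bridge-nf-call-iptables"))
		fmt.Println("ip_forward:             ", sysctl("/proc/sys/net/ipv4/ip_forward"))
	}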
	I1209 00:05:21.053001  782623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:21.233232  782623 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:21.359183  782623 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:21.359279  782623 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:21.365045  782623 start.go:564] Will wait 60s for crictl version
	I1209 00:05:21.365120  782623 ssh_runner.go:195] Run: which crictl
	I1209 00:05:21.369237  782623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:21.402947  782623 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:21.403038  782623 ssh_runner.go:195] Run: crio --version
	I1209 00:05:21.434746  782623 ssh_runner.go:195] Run: crio --version
	I1209 00:05:21.469197  782623 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 00:05:19.552962  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:20.053230  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:20.553604  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:21.053765  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:21.553623  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:22.053153  782285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 00:05:22.170222  782285 kubeadm.go:1114] duration metric: took 4.725729945s to wait for elevateKubeSystemPrivileges
	I1209 00:05:22.170285  782285 kubeadm.go:403] duration metric: took 17.491717988s to StartCluster
	I1209 00:05:22.170314  782285 settings.go:142] acquiring lock: {Name:mk01a7d116accfccda14c363bded9d7c0216d454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:22.170447  782285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:05:22.172335  782285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/kubeconfig: {Name:mk0db57d03f858808a26818547681e8d59b0a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:22.172643  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 00:05:22.172676  782285 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 00:05:22.172798  782285 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 00:05:22.172893  782285 addons.go:70] Setting storage-provisioner=true in profile "kindnet-474683"
	I1209 00:05:22.172911  782285 addons.go:239] Setting addon storage-provisioner=true in "kindnet-474683"
	I1209 00:05:22.172932  782285 config.go:182] Loaded profile config "kindnet-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:05:22.172954  782285 host.go:66] Checking if "kindnet-474683" exists ...
	I1209 00:05:22.172983  782285 addons.go:70] Setting default-storageclass=true in profile "kindnet-474683"
	I1209 00:05:22.172996  782285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-474683"
	I1209 00:05:22.174805  782285 out.go:179] * Verifying Kubernetes components...
	I1209 00:05:22.176019  782285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:22.177140  782285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 00:05:22.178044  782285 addons.go:239] Setting addon default-storageclass=true in "kindnet-474683"
	I1209 00:05:22.178087  782285 host.go:66] Checking if "kindnet-474683" exists ...
	I1209 00:05:22.178379  782285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 00:05:22.178397  782285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 00:05:22.180538  782285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 00:05:22.180559  782285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 00:05:22.183323  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.184062  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:22.184118  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.184417  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:05:22.184996  782285 main.go:143] libmachine: domain kindnet-474683 has defined MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.186001  782285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:63:fa", ip: ""} in network mk-kindnet-474683: {Iface:virbr4 ExpiryTime:2025-12-09 01:04:54 +0000 UTC Type:0 Mac:52:54:00:30:63:fa Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:kindnet-474683 Clientid:01:52:54:00:30:63:fa}
	I1209 00:05:22.186040  782285 main.go:143] libmachine: domain kindnet-474683 has defined IP address 192.168.72.143 and MAC address 52:54:00:30:63:fa in network mk-kindnet-474683
	I1209 00:05:22.186266  782285 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/kindnet-474683/id_rsa Username:docker}
	I1209 00:05:22.433854  782285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 00:05:22.530225  782285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:22.710987  782285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 00:05:22.756262  782285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 00:05:22.877417  782285 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
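
Annotation: the long sed pipeline at 00:05:22.433854 splices a hosts-plugin stanza (and a log directive ahead of errors) into CoreDNS's Corefile inside the kube-system coredns ConfigMap, so host.minikube.internal resolves to the host-side bridge IP from within the cluster. Read straight off that sed command, the injected block is:

	hosts {
	   192.168.72.1 host.minikube.internal
	   fallthrough
	}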
	I1209 00:05:22.878841  782285 node_ready.go:35] waiting up to 15m0s for node "kindnet-474683" to be "Ready" ...
	I1209 00:05:23.391178  782285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-474683" context rescaled to 1 replicas
	I1209 00:05:23.598055  782285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1209 00:05:19.200298  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:21.200565  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:23.201257  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:21.473193  782623 main.go:143] libmachine: domain calico-474683 has defined MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:21.473702  782623 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:7e:f5", ip: ""} in network mk-calico-474683: {Iface:virbr2 ExpiryTime:2025-12-09 01:05:14 +0000 UTC Type:0 Mac:52:54:00:87:7e:f5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:calico-474683 Clientid:01:52:54:00:87:7e:f5}
	I1209 00:05:21.473733  782623 main.go:143] libmachine: domain calico-474683 has defined IP address 192.168.50.66 and MAC address 52:54:00:87:7e:f5 in network mk-calico-474683
	I1209 00:05:21.473960  782623 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:21.478713  782623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
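	# Expanded, the /etc/hosts one-liner above drops any stale host.minikube.internal
	# entry, appends the current mapping, and copies the temp file back under sudo so
	# the redirection itself needs no root (same commands as the log, re-quoted):
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.50.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts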
	I1209 00:05:21.494058  782623 kubeadm.go:884] updating cluster {Name:calico-474683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-474683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.66 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:21.494207  782623 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:21.494261  782623 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:21.531065  782623 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
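	# A minimal manual equivalent of the preload check above (minikube parses the
	# crictl JSON; grep here is only a sketch):
	sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.34.2' \
		&& echo "images preloaded" || echo "images not preloaded"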
	I1209 00:05:21.531149  782623 ssh_runner.go:195] Run: which lz4
	I1209 00:05:21.535690  782623 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 00:05:21.540611  782623 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 00:05:21.540649  782623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1209 00:05:23.599260  782285 addons.go:530] duration metric: took 1.426451881s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 00:05:26.003810  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 00:05:26.003841  782653 machine.go:97] duration metric: took 6.33340561s to provisionDockerMachine
	I1209 00:05:26.003854  782653 start.go:293] postStartSetup for "pause-165880" (driver="kvm2")
	I1209 00:05:26.003864  782653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 00:05:26.003941  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 00:05:26.007221  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007720  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.007781  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.007981  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.100638  782653 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 00:05:26.105932  782653 info.go:137] Remote host: Buildroot 2025.02
	I1209 00:05:26.105968  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/addons for local assets ...
	I1209 00:05:26.106049  782653 filesync.go:126] Scanning /home/jenkins/minikube-integration/22075-744871/.minikube/files for local assets ...
	I1209 00:05:26.106130  782653 filesync.go:149] local asset: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem -> 7489302.pem in /etc/ssl/certs
	I1209 00:05:26.106227  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 00:05:26.123738  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:26.167380  782653 start.go:296] duration metric: took 163.489508ms for postStartSetup
	I1209 00:05:26.167445  782653 fix.go:56] duration metric: took 6.501816173s for fixHost
	I1209 00:05:26.171923  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172486  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.172518  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.172775  782653 main.go:143] libmachine: Using SSH client type: native
	I1209 00:05:26.173094  782653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.83.217 22 <nil> <nil>}
	I1209 00:05:26.173118  782653 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 00:05:26.293758  782653 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765238726.290651991
	
	I1209 00:05:26.293787  782653 fix.go:216] guest clock: 1765238726.290651991
	I1209 00:05:26.293797  782653 fix.go:229] Guest: 2025-12-09 00:05:26.290651991 +0000 UTC Remote: 2025-12-09 00:05:26.167452687 +0000 UTC m=+30.731624268 (delta=123.199304ms)
	I1209 00:05:26.293823  782653 fix.go:200] guest clock delta is within tolerance: 123.199304ms
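	# The delta is simply guest clock minus the host-side Remote timestamp
	# (00:05:26.290651991 - 00:05:26.167452687 ~= 123.2 ms). A rough manual re-check,
	# reusing this run's SSH key, guest user, and guest IP:
	guest=$(ssh -i /home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa docker@192.168.83.217 'date +%s.%N')
	host=$(date +%s.%N)
	echo "clock delta: $(echo "$guest - $host" | bc) s"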
	I1209 00:05:26.293829  782653 start.go:83] releasing machines lock for "pause-165880", held for 6.628237017s
	I1209 00:05:26.297200  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.297750  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.297786  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.298435  782653 ssh_runner.go:195] Run: cat /version.json
	I1209 00:05:26.298534  782653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 00:05:26.302194  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302574  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302770  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.302815  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.302991  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.303012  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:26.303153  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:26.303414  782653 sshutil.go:53] new ssh client: &{IP:192.168.83.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/pause-165880/id_rsa Username:docker}
	I1209 00:05:26.388338  782653 ssh_runner.go:195] Run: systemctl --version
	I1209 00:05:26.411503  782653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 00:05:26.564483  782653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 00:05:26.577338  782653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 00:05:26.577435  782653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 00:05:26.589629  782653 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
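	# Nothing matched in this run, but when bridge or podman CNI configs do exist the
	# find above renames them aside rather than deleting them, equivalent to
	# (hypothetical file name):
	sudo mv /etc/cni/net.d/100-crio-bridge.conflist /etc/cni/net.d/100-crio-bridge.conflist.mk_disabled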
	I1209 00:05:26.589669  782653 start.go:496] detecting cgroup driver to use...
	I1209 00:05:26.589771  782653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 00:05:26.614167  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 00:05:26.634398  782653 docker.go:218] disabling cri-docker service (if available) ...
	I1209 00:05:26.634551  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 00:05:26.655828  782653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 00:05:26.677740  782653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 00:05:26.879759  782653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 00:05:27.075050  782653 docker.go:234] disabling docker service ...
	I1209 00:05:27.075148  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 00:05:27.108544  782653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 00:05:27.128174  782653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 00:05:27.333496  782653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 00:05:27.527709  782653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 00:05:27.547600  782653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 00:05:27.573078  782653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 00:05:27.573176  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.591439  782653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 00:05:27.591536  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.610214  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.624565  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.637537  782653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 00:05:27.652581  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.667490  782653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 00:05:27.683625  782653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
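	# Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf
	# (a sketch showing only the keys touched in this run):
	#
	#     pause_image = "registry.k8s.io/pause:3.10.1"
	#     cgroup_manager = "cgroupfs"
	#     conmon_cgroup = "pod"
	#     default_sysctls = [
	#       "net.ipv4.ip_unprivileged_port_start=0",
	#     ]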
	I1209 00:05:27.699870  782653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 00:05:27.713298  782653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 00:05:27.726610  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:27.922280  782653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 00:05:28.154862  782653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 00:05:28.154956  782653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 00:05:28.160673  782653 start.go:564] Will wait 60s for crictl version
	I1209 00:05:28.160757  782653 ssh_runner.go:195] Run: which crictl
	I1209 00:05:28.165831  782653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 00:05:28.203701  782653 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 00:05:28.203843  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.238662  782653 ssh_runner.go:195] Run: crio --version
	I1209 00:05:28.282458  782653 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1209 00:05:25.205127  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	W1209 00:05:27.701719  781906 pod_ready.go:104] pod "coredns-66bc5c9577-x9bsg" is not "Ready", error: <nil>
	I1209 00:05:24.565207  782623 crio.go:462] duration metric: took 3.029526862s to copy over tarball
	I1209 00:05:24.565324  782623 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W1209 00:05:24.886171  782285 node_ready.go:57] node "kindnet-474683" has "Ready":"False" status (will retry)
	W1209 00:05:27.383880  782285 node_ready.go:57] node "kindnet-474683" has "Ready":"False" status (will retry)
	I1209 00:05:28.287928  782653 main.go:143] libmachine: domain pause-165880 has defined MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288417  782653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c2:7f", ip: ""} in network mk-pause-165880: {Iface:virbr5 ExpiryTime:2025-12-09 01:03:45 +0000 UTC Type:0 Mac:52:54:00:74:c2:7f Iaid: IPaddr:192.168.83.217 Prefix:24 Hostname:pause-165880 Clientid:01:52:54:00:74:c2:7f}
	I1209 00:05:28.288452  782653 main.go:143] libmachine: domain pause-165880 has defined IP address 192.168.83.217 and MAC address 52:54:00:74:c2:7f in network mk-pause-165880
	I1209 00:05:28.288697  782653 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1209 00:05:28.295003  782653 kubeadm.go:884] updating cluster {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 00:05:28.295164  782653 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 00:05:28.295231  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.342780  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.342817  782653 crio.go:433] Images already preloaded, skipping extraction
	I1209 00:05:28.342903  782653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 00:05:28.378433  782653 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 00:05:28.378469  782653 cache_images.go:86] Images are preloaded, skipping loading
	I1209 00:05:28.378482  782653 kubeadm.go:935] updating node { 192.168.83.217 8443 v1.34.2 crio true true} ...
	I1209 00:05:28.378663  782653 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-165880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
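	# The empty ExecStart= line in the drop-in above is the usual systemd idiom for
	# overriding an exec directive: it clears the stock command list before the new
	# ExecStart redefines it. After the daemon-reload below, the merged unit can be
	# inspected with:
	sudo systemctl cat kubelet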
	I1209 00:05:28.378778  782653 ssh_runner.go:195] Run: crio config
	I1209 00:05:28.437108  782653 cni.go:84] Creating CNI manager for ""
	I1209 00:05:28.437142  782653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 00:05:28.437168  782653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 00:05:28.437201  782653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-165880 NodeName:pause-165880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 00:05:28.437474  782653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-165880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
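	# The stacked documents above (InitConfiguration, ClusterConfiguration,
	# KubeletConfiguration, KubeProxyConfiguration) are what lands in
	# /var/tmp/minikube/kubeadm.yaml.new below; kubeadm can sanity-check such a file
	# offline (a sketch using this run's binary path):
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new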
	
	I1209 00:05:28.437593  782653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 00:05:28.453634  782653 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 00:05:28.453724  782653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 00:05:28.471239  782653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 00:05:28.493830  782653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 00:05:28.520139  782653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1209 00:05:28.549492  782653 ssh_runner.go:195] Run: grep 192.168.83.217	control-plane.minikube.internal$ /etc/hosts
	I1209 00:05:28.554579  782653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 00:05:28.753857  782653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 00:05:28.773412  782653 certs.go:69] Setting up /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880 for IP: 192.168.83.217
	I1209 00:05:28.773448  782653 certs.go:195] generating shared ca certs ...
	I1209 00:05:28.773475  782653 certs.go:227] acquiring lock for ca certs: {Name:mk069bbba4d83d251409b18022ca36eb869d942f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 00:05:28.773724  782653 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key
	I1209 00:05:28.773877  782653 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key
	I1209 00:05:28.773921  782653 certs.go:257] generating profile certs ...
	I1209 00:05:28.774082  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/client.key
	I1209 00:05:28.774272  782653 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key.66e6a13d
	I1209 00:05:28.774378  782653 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key
	I1209 00:05:28.774576  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem (1338 bytes)
	W1209 00:05:28.774636  782653 certs.go:480] ignoring /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930_empty.pem, impossibly tiny 0 bytes
	I1209 00:05:28.774654  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 00:05:28.774697  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/ca.pem (1082 bytes)
	I1209 00:05:28.774736  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/cert.pem (1123 bytes)
	I1209 00:05:28.774784  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/certs/key.pem (1675 bytes)
	I1209 00:05:28.774872  782653 certs.go:484] found cert: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem (1708 bytes)
	I1209 00:05:28.776246  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 00:05:28.810505  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 00:05:28.842442  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 00:05:28.877226  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 00:05:28.908400  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 00:05:28.945631  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 00:05:28.979242  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 00:05:29.010043  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/pause-165880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 00:05:29.051848  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/certs/748930.pem --> /usr/share/ca-certificates/748930.pem (1338 bytes)
	I1209 00:05:29.095619  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/ssl/certs/7489302.pem --> /usr/share/ca-certificates/7489302.pem (1708 bytes)
	I1209 00:05:29.133913  782653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22075-744871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 00:05:29.167271  782653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 00:05:29.189858  782653 ssh_runner.go:195] Run: openssl version
	I1209 00:05:29.197275  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.213666  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7489302.pem /etc/ssl/certs/7489302.pem
	I1209 00:05:29.226518  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233152  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 23:15 /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.233284  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489302.pem
	I1209 00:05:29.240720  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 00:05:29.257715  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.273541  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 00:05:29.287565  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293576  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 23:04 /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.293641  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 00:05:29.301507  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 00:05:29.317647  782653 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.340472  782653 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/748930.pem /etc/ssl/certs/748930.pem
	I1209 00:05:29.392662  782653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417247  782653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 23:15 /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.417320  782653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748930.pem
	I1209 00:05:29.436429  782653 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
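	# The names tested above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL
	# subject-hash links: the output of `openssl x509 -hash` on the PEM plus a ".0"
	# suffix, which is how OpenSSL looks up CAs in /etc/ssl/certs. For example:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0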
	I1209 00:05:29.462985  782653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 00:05:29.472337  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 00:05:29.490957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 00:05:29.504936  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 00:05:29.522957  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 00:05:29.538865  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 00:05:29.563145  782653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
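	# Each -checkend 86400 above exits non-zero if the certificate expires within
	# 86400 seconds (24 h), flagging control-plane certs for renewal before reuse:
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		&& echo "valid for at least 24h" || echo "expiring soon"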
	I1209 00:05:29.581605  782653 kubeadm.go:401] StartCluster: {Name:pause-165880 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-165880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 00:05:29.581729  782653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 00:05:29.581823  782653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 00:05:29.688653  782653 cri.go:89] found id: "ed65d6af397731b1f5197ca1ee72a10abb2e0c22f62636e7bf2f7991071908cd"
	I1209 00:05:29.688692  782653 cri.go:89] found id: "9d522f4cf939d18e1de8df559158d043f98ae2ae01d8e14fe19b99d12c966f9f"
	I1209 00:05:29.688699  782653 cri.go:89] found id: "1797d0193cbe8ccd00b871fd19c9db605c89849a37a5010a5b0afa9022e4bf5f"
	I1209 00:05:29.688704  782653 cri.go:89] found id: "f00f2f5cffabec2b84bd23963ef53056ad87c8c1144d913e8afc9138caa5aa55"
	I1209 00:05:29.688709  782653 cri.go:89] found id: "db99b4ce7c7601a2d364718d8dd4fd7d04ea390b975cdec540ad671bbacaff1a"
	I1209 00:05:29.688728  782653 cri.go:89] found id: "3b25232d3c3957b7529e17f93abc0620cdd1d4bfa51469cdb8094edfce1aa828"
	I1209 00:05:29.688734  782653 cri.go:89] found id: ""
	I1209 00:05:29.688797  782653 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-165880 -n pause-165880
helpers_test.go:269: (dbg) Run:  kubectl --context pause-165880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (62.49s)


Test pass (373/431)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 25.68
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.18
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.34.2/json-events 10.89
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-beta.0/json-events 11.07
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.68
31 TestOffline 53.39
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 129.83
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 11.58
44 TestAddons/parallel/Registry 19.76
45 TestAddons/parallel/RegistryCreds 0.77
47 TestAddons/parallel/InspektorGadget 11.81
48 TestAddons/parallel/MetricsServer 7.04
50 TestAddons/parallel/CSI 44.82
51 TestAddons/parallel/Headlamp 23.57
52 TestAddons/parallel/CloudSpanner 6.64
53 TestAddons/parallel/LocalPath 60.47
54 TestAddons/parallel/NvidiaDevicePlugin 6.95
55 TestAddons/parallel/Yakd 10.88
57 TestAddons/StoppedEnableDisable 90.4
58 TestCertOptions 81.53
59 TestCertExpiration 291.9
61 TestForceSystemdFlag 57.72
62 TestForceSystemdEnv 80.21
67 TestErrorSpam/setup 38.46
68 TestErrorSpam/start 0.36
69 TestErrorSpam/status 0.68
70 TestErrorSpam/pause 1.52
71 TestErrorSpam/unpause 1.73
72 TestErrorSpam/stop 5.26
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 48.15
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 50.58
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
84 TestFunctional/serial/CacheCmd/cache/add_local 2.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 30.77
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.27
95 TestFunctional/serial/LogsFileCmd 1.29
96 TestFunctional/serial/InvalidService 4.03
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 10.13
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.74
106 TestFunctional/parallel/ServiceCmdConnect 28.6
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 43.67
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.1
112 TestFunctional/parallel/MySQL 36.85
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 1.01
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
122 TestFunctional/parallel/License 0.41
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
127 TestFunctional/parallel/ImageCommands/ImageBuild 6.43
128 TestFunctional/parallel/ImageCommands/Setup 1.99
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.19
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.83
148 TestFunctional/parallel/ServiceCmd/DeployApp 31.18
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.42
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
152 TestFunctional/parallel/MountCmd/any-port 8.22
153 TestFunctional/parallel/MountCmd/specific-port 1.53
154 TestFunctional/parallel/ServiceCmd/List 0.42
155 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
157 TestFunctional/parallel/MountCmd/VerifyCleanup 1.13
158 TestFunctional/parallel/ServiceCmd/Format 0.36
159 TestFunctional/parallel/ServiceCmd/URL 0.27
160 TestFunctional/parallel/Version/short 0.07
161 TestFunctional/parallel/Version/components 0.41
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 72.87
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 28.43
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.99
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.23
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.5
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.14
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 45.07
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.08
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.34
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.33
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.55
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.5
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 11.35
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.68
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 26.66
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.2
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 40.27
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.42
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.27
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 31.1
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.2
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.27
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.25
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.41
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.94
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.32
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.31
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.88
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.4
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.96
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.09
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.09
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.1
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.68
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.88
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 2.22
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.8
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 24.21
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.37
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.36
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.35
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.29
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.27
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.32
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.26
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.31
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.35
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.43
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.33
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 198.94
262 TestMultiControlPlane/serial/DeployApp 7.38
263 TestMultiControlPlane/serial/PingHostFromPods 1.45
264 TestMultiControlPlane/serial/AddWorkerNode 46.15
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 11.47
268 TestMultiControlPlane/serial/StopSecondaryNode 88.09
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
270 TestMultiControlPlane/serial/RestartSecondaryNode 31.32
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 353.89
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.26
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
275 TestMultiControlPlane/serial/StopCluster 263.98
276 TestMultiControlPlane/serial/RestartCluster 100.46
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
278 TestMultiControlPlane/serial/AddSecondaryNode 66.69
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestJSONOutput/start/Command 77.73
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.71
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.61
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.25
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 73.47
316 TestMountStart/serial/StartWithMountFirst 19.42
317 TestMountStart/serial/VerifyMountFirst 0.32
318 TestMountStart/serial/StartWithMountSecond 19.13
319 TestMountStart/serial/VerifyMountSecond 0.33
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.33
322 TestMountStart/serial/Stop 1.26
323 TestMountStart/serial/RestartStopped 18.52
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 94.95
328 TestMultiNode/serial/DeployApp2Nodes 6.21
329 TestMultiNode/serial/PingHostFrom2Pods 0.93
330 TestMultiNode/serial/AddNode 43
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.44
333 TestMultiNode/serial/CopyFile 6.31
334 TestMultiNode/serial/StopNode 2.36
335 TestMultiNode/serial/StartAfterStop 38.85
336 TestMultiNode/serial/RestartKeepsNodes 272.38
337 TestMultiNode/serial/DeleteNode 2.71
338 TestMultiNode/serial/StopMultiNode 143.9
339 TestMultiNode/serial/RestartMultiNode 111.86
340 TestMultiNode/serial/ValidateNameConflict 39.42
347 TestScheduledStopUnix 107.19
351 TestRunningBinaryUpgrade 121.69
353 TestKubernetesUpgrade 145.95
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
357 TestNoKubernetes/serial/StartWithK8s 97.93
365 TestNetworkPlugins/group/false 4.74
369 TestNoKubernetes/serial/StartWithStopK8s 29.86
370 TestNoKubernetes/serial/Start 31.62
371 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
372 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
373 TestNoKubernetes/serial/ProfileList 1.12
374 TestNoKubernetes/serial/Stop 1.3
375 TestNoKubernetes/serial/StartNoArgs 37.53
376 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
377 TestISOImage/Setup 32.14
378 TestStoppedBinaryUpgrade/Setup 3.76
379 TestStoppedBinaryUpgrade/Upgrade 94.13
381 TestISOImage/Binaries/crictl 0.2
382 TestISOImage/Binaries/curl 0.19
383 TestISOImage/Binaries/docker 0.17
384 TestISOImage/Binaries/git 0.18
385 TestISOImage/Binaries/iptables 0.19
386 TestISOImage/Binaries/podman 0.19
387 TestISOImage/Binaries/rsync 0.21
388 TestISOImage/Binaries/socat 0.19
389 TestISOImage/Binaries/wget 0.21
390 TestISOImage/Binaries/VBoxControl 0.22
391 TestISOImage/Binaries/VBoxService 0.27
400 TestPause/serial/Start 114
401 TestNetworkPlugins/group/auto/Start 99.28
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
403 TestNetworkPlugins/group/kindnet/Start 73.32
404 TestNetworkPlugins/group/calico/Start 96
406 TestNetworkPlugins/group/auto/KubeletFlags 0.24
407 TestNetworkPlugins/group/auto/NetCatPod 11.31
408 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
409 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
410 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
411 TestNetworkPlugins/group/auto/DNS 0.24
412 TestNetworkPlugins/group/auto/Localhost 0.27
413 TestNetworkPlugins/group/auto/HairPin 0.18
414 TestNetworkPlugins/group/kindnet/DNS 0.18
415 TestNetworkPlugins/group/kindnet/Localhost 0.15
416 TestNetworkPlugins/group/kindnet/HairPin 0.16
417 TestNetworkPlugins/group/custom-flannel/Start 75.08
418 TestNetworkPlugins/group/enable-default-cni/Start 96.7
419 TestNetworkPlugins/group/flannel/Start 96.58
420 TestNetworkPlugins/group/calico/ControllerPod 6.01
421 TestNetworkPlugins/group/calico/KubeletFlags 0.19
422 TestNetworkPlugins/group/calico/NetCatPod 11.75
423 TestNetworkPlugins/group/calico/DNS 0.18
424 TestNetworkPlugins/group/calico/Localhost 0.13
425 TestNetworkPlugins/group/calico/HairPin 0.14
426 TestNetworkPlugins/group/bridge/Start 90.07
427 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
428 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
429 TestNetworkPlugins/group/custom-flannel/DNS 0.19
430 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
431 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
432 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
433 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
435 TestStartStop/group/old-k8s-version/serial/FirstStart 92.41
436 TestNetworkPlugins/group/flannel/ControllerPod 6.01
437 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
438 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
439 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
440 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
441 TestNetworkPlugins/group/flannel/NetCatPod 11.31
442 TestNetworkPlugins/group/flannel/DNS 0.19
443 TestNetworkPlugins/group/flannel/Localhost 0.16
444 TestNetworkPlugins/group/flannel/HairPin 0.16
446 TestStartStop/group/no-preload/serial/FirstStart 71.24
448 TestStartStop/group/embed-certs/serial/FirstStart 65.27
449 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
450 TestNetworkPlugins/group/bridge/NetCatPod 12.32
451 TestNetworkPlugins/group/bridge/DNS 0.18
452 TestNetworkPlugins/group/bridge/Localhost 0.15
453 TestNetworkPlugins/group/bridge/HairPin 0.14
455 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.87
456 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
457 TestStartStop/group/no-preload/serial/DeployApp 10.42
458 TestStartStop/group/embed-certs/serial/DeployApp 10.31
459 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.41
460 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.27
461 TestStartStop/group/old-k8s-version/serial/Stop 84.06
462 TestStartStop/group/no-preload/serial/Stop 69.62
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
464 TestStartStop/group/embed-certs/serial/Stop 88.3
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
466 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
467 TestStartStop/group/no-preload/serial/SecondStart 49.54
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
469 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.49
470 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
471 TestStartStop/group/old-k8s-version/serial/SecondStart 43.12
472 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
473 TestStartStop/group/embed-certs/serial/SecondStart 54.43
474 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
475 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
476 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
477 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
478 TestStartStop/group/no-preload/serial/Pause 2.88
480 TestStartStop/group/newest-cni/serial/FirstStart 44.29
481 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
482 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
483 TestStartStop/group/old-k8s-version/serial/Pause 3.29
485 TestISOImage/PersistentMounts//data 0.2
486 TestISOImage/PersistentMounts//var/lib/docker 0.17
487 TestISOImage/PersistentMounts//var/lib/cni 0.28
488 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
489 TestISOImage/PersistentMounts//var/lib/minikube 0.19
490 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
491 TestISOImage/PersistentMounts//var/lib/boot2docker 0.19
492 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
493 TestISOImage/VersionJSON 0.17
494 TestISOImage/eBPFSupport 0.17
495 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
496 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
497 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.27
498 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
499 TestStartStop/group/embed-certs/serial/Pause 2.96
500 TestStartStop/group/newest-cni/serial/DeployApp 0
501 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
502 TestStartStop/group/newest-cni/serial/Stop 7.15
503 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
504 TestStartStop/group/newest-cni/serial/SecondStart 31.91
505 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
506 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
507 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
508 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
510 TestStartStop/group/newest-cni/serial/Pause 3.25
511 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
512 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.67

TestDownloadOnly/v1.28.0/json-events (25.68s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-304329 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-304329 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.683535881s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.68s)
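
Note: the command under test is reproducible outside CI. --download-only populates the ISO, preload-tarball and kubectl caches for one Kubernetes version and exits without ever booting a VM. A minimal sketch, using the flags from the log above and assuming a stock minikube binary in place of out/minikube-linux-amd64:

    # Pre-populate the caches for v1.28.0 on the crio runtime; no cluster is created.
    minikube start -o=json --download-only -p download-only-304329 --force \
      --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=kvm2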

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1208 23:03:37.342466  748930 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1208 23:03:37.342575  748930 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
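
Note: preload-exists reduces to a filesystem check that the previous subtest left the tarball in the cache. Checked by hand it would look roughly like this (path verbatim from the log; MINIKUBE_HOME is set in this run's environment and will differ elsewhere):

    # A missing file here means the download-only step failed to cache the preload.
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"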

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-304329
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-304329: exit status 85 (80.856562ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-304329 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-304329 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 23:03:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 23:03:11.717357  748943 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:03:11.717544  748943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:11.717558  748943 out.go:374] Setting ErrFile to fd 2...
	I1208 23:03:11.717564  748943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:11.717805  748943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	W1208 23:03:11.717962  748943 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22075-744871/.minikube/config/config.json: open /home/jenkins/minikube-integration/22075-744871/.minikube/config/config.json: no such file or directory
	I1208 23:03:11.718689  748943 out.go:368] Setting JSON to true
	I1208 23:03:11.719788  748943 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6332,"bootTime":1765228660,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:03:11.719868  748943 start.go:143] virtualization: kvm guest
	I1208 23:03:11.724282  748943 out.go:99] [download-only-304329] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1208 23:03:11.724486  748943 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 23:03:11.724516  748943 notify.go:221] Checking for updates...
	I1208 23:03:11.725606  748943 out.go:171] MINIKUBE_LOCATION=22075
	I1208 23:03:11.726836  748943 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:03:11.727982  748943 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:03:11.729232  748943 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:03:11.730464  748943 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 23:03:11.732564  748943 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 23:03:11.732805  748943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:03:11.766883  748943 out.go:99] Using the kvm2 driver based on user configuration
	I1208 23:03:11.766926  748943 start.go:309] selected driver: kvm2
	I1208 23:03:11.766935  748943 start.go:927] validating driver "kvm2" against <nil>
	I1208 23:03:11.767285  748943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 23:03:11.767801  748943 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 23:03:11.767939  748943 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 23:03:11.767965  748943 cni.go:84] Creating CNI manager for ""
	I1208 23:03:11.768012  748943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:03:11.768023  748943 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 23:03:11.768061  748943 start.go:353] cluster config:
	{Name:download-only-304329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-304329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:03:11.768242  748943 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 23:03:11.769889  748943 out.go:99] Downloading VM boot image ...
	I1208 23:03:11.769939  748943 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1208 23:03:23.351535  748943 out.go:99] Starting "download-only-304329" primary control-plane node in "download-only-304329" cluster
	I1208 23:03:23.351567  748943 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 23:03:23.452084  748943 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:23.452123  748943 cache.go:65] Caching tarball of preloaded images
	I1208 23:03:23.452316  748943 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 23:03:23.454098  748943 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1208 23:03:23.454123  748943 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 23:03:23.563398  748943 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1208 23:03:23.563534  748943 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:36.316790  748943 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1208 23:03:36.317255  748943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-304329/config.json ...
	I1208 23:03:36.317300  748943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-304329/config.json: {Name:mke94f623353db95541059e08df34087f29b0a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:03:36.317557  748943 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 23:03:36.317798  748943 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-304329 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304329"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
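
Note: this subtest passes because minikube logs is expected to fail here. A download-only profile never creates a host, so there is nothing to collect logs from, as the stdout above says. A quick check of the same behavior (profile name from the log):

    out/minikube-linux-amd64 logs -p download-only-304329
    echo $?   # exited 85 in the run above, and the subtest still passed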

TestDownloadOnly/v1.28.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.18s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-304329
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)
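
Note: the two delete subtests double as cleanup and as a contract check, since delete must succeed even when there is nothing left to remove. The manual equivalent, with the commands from the audit table above:

    minikube delete --all                    # removes every profile
    minikube delete -p download-only-304329  # DeleteAlwaysSucceeds: exits 0 even after --all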

TestDownloadOnly/v1.34.2/json-events (10.89s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-963535 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-963535 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.890365659s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (10.89s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1208 23:03:48.655882  748930 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1208 23:03:48.655929  748930 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-963535
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-963535: exit status 85 (77.573338ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-304329 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-304329 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ delete  │ -p download-only-304329                                                                                                                                                 │ download-only-304329 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ start   │ -o=json --download-only -p download-only-963535 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-963535 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 23:03:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 23:03:37.824293  749191 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:03:37.824645  749191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:37.824656  749191 out.go:374] Setting ErrFile to fd 2...
	I1208 23:03:37.824661  749191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:37.824856  749191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:03:37.825420  749191 out.go:368] Setting JSON to true
	I1208 23:03:37.826417  749191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6358,"bootTime":1765228660,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:03:37.826508  749191 start.go:143] virtualization: kvm guest
	I1208 23:03:37.828509  749191 out.go:99] [download-only-963535] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:03:37.828748  749191 notify.go:221] Checking for updates...
	I1208 23:03:37.829819  749191 out.go:171] MINIKUBE_LOCATION=22075
	I1208 23:03:37.831065  749191 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:03:37.832204  749191 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:03:37.833473  749191 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:03:37.834622  749191 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 23:03:37.836503  749191 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 23:03:37.836794  749191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:03:37.870395  749191 out.go:99] Using the kvm2 driver based on user configuration
	I1208 23:03:37.870444  749191 start.go:309] selected driver: kvm2
	I1208 23:03:37.870454  749191 start.go:927] validating driver "kvm2" against <nil>
	I1208 23:03:37.870924  749191 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 23:03:37.871736  749191 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 23:03:37.871976  749191 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 23:03:37.872012  749191 cni.go:84] Creating CNI manager for ""
	I1208 23:03:37.872081  749191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:03:37.872096  749191 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 23:03:37.872161  749191 start.go:353] cluster config:
	{Name:download-only-963535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-963535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:03:37.872309  749191 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 23:03:37.873838  749191 out.go:99] Starting "download-only-963535" primary control-plane node in "download-only-963535" cluster
	I1208 23:03:37.873869  749191 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:03:38.398913  749191 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:38.398970  749191 cache.go:65] Caching tarball of preloaded images
	I1208 23:03:38.399195  749191 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:03:38.401018  749191 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1208 23:03:38.401045  749191 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 23:03:38.511650  749191 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1208 23:03:38.511712  749191 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:47.761886  749191 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 23:03:47.762358  749191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-963535/config.json ...
	I1208 23:03:47.762457  749191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-963535/config.json: {Name:mk6316aca8b6dadd10e9399d29b2530a105b230c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:03:47.762651  749191 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 23:03:47.762833  749191 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/linux/amd64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-963535 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963535"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-963535
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.35.0-beta.0/json-events (11.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-595699 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-595699 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.067966564s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (11.07s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1208 23:04:00.139043  748930 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1208 23:04:00.139097  748930 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-595699
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-595699: exit status 85 (80.467966ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-304329 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-304329 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ delete  │ -p download-only-304329                                                                                                                                                        │ download-only-304329 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ start   │ -o=json --download-only -p download-only-963535 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-963535 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ delete  │ -p download-only-963535                                                                                                                                                        │ download-only-963535 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │ 08 Dec 25 23:03 UTC │
	│ start   │ -o=json --download-only -p download-only-595699 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-595699 │ jenkins │ v1.37.0 │ 08 Dec 25 23:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 23:03:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 23:03:49.128503  749384 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:03:49.128604  749384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:49.128608  749384 out.go:374] Setting ErrFile to fd 2...
	I1208 23:03:49.128612  749384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:03:49.128832  749384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:03:49.129340  749384 out.go:368] Setting JSON to true
	I1208 23:03:49.130338  749384 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6369,"bootTime":1765228660,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:03:49.130419  749384 start.go:143] virtualization: kvm guest
	I1208 23:03:49.132215  749384 out.go:99] [download-only-595699] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:03:49.132445  749384 notify.go:221] Checking for updates...
	I1208 23:03:49.133550  749384 out.go:171] MINIKUBE_LOCATION=22075
	I1208 23:03:49.135381  749384 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:03:49.136563  749384 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:03:49.137556  749384 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:03:49.138663  749384 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 23:03:49.140602  749384 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 23:03:49.140905  749384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:03:49.173557  749384 out.go:99] Using the kvm2 driver based on user configuration
	I1208 23:03:49.173602  749384 start.go:309] selected driver: kvm2
	I1208 23:03:49.173611  749384 start.go:927] validating driver "kvm2" against <nil>
	I1208 23:03:49.173981  749384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 23:03:49.174577  749384 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 23:03:49.174751  749384 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 23:03:49.174783  749384 cni.go:84] Creating CNI manager for ""
	I1208 23:03:49.174853  749384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 23:03:49.174865  749384 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 23:03:49.174925  749384 start.go:353] cluster config:
	{Name:download-only-595699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-595699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:03:49.175040  749384 iso.go:125] acquiring lock: {Name:mk3f3df5ef11b93dcc62a5800b46f2775cc6cbb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 23:03:49.176284  749384 out.go:99] Starting "download-only-595699" primary control-plane node in "download-only-595699" cluster
	I1208 23:03:49.176312  749384 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 23:03:49.278543  749384 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:49.278588  749384 cache.go:65] Caching tarball of preloaded images
	I1208 23:03:49.278815  749384 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 23:03:49.280441  749384 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1208 23:03:49.280472  749384 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 23:03:49.391923  749384 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1208 23:03:49.391974  749384 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1208 23:03:59.008570  749384 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 23:03:59.009011  749384 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-595699/config.json ...
	I1208 23:03:59.009050  749384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/download-only-595699/config.json: {Name:mk17475f7794d5ac2de6b6c61a372470a8b8cf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 23:03:59.009237  749384 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 23:03:59.009420  749384 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22075-744871/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-595699 host does not exist
	  To start a cluster, run: "minikube start -p download-only-595699"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-595699
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1208 23:04:01.023016  748930 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-322867 --alsologtostderr --binary-mirror http://127.0.0.1:43611 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-322867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-322867
--- PASS: TestBinaryMirror (0.68s)
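
Note: the --binary-mirror flag redirects kubectl/kubelet/kubeadm downloads from dl.k8s.io to the given URL; here http://127.0.0.1:43611 is presumably served by the test harness itself. A rough hand-run equivalent, assuming a ./mirror directory that mimics the dl.k8s.io release path layout (both the directory and the profile name are hypothetical):

    python3 -m http.server 43611 --directory ./mirror &
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:43611 --driver=kvm2 --container-runtime=crio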

TestOffline (53.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-888838 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-888838 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (52.461605564s)
helpers_test.go:175: Cleaning up "offline-crio-888838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-888838
--- PASS: TestOffline (53.39s)
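
Note: the start command here is an ordinary invocation (verbatim from the log, minus the out/ test binary path); the offline scenario presumably succeeds because earlier tests in this run already warmed the ISO, preload and image caches:

    minikube start -p offline-crio-888838 --alsologtostderr -v=1 \
      --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio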

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-192260
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-192260: exit status 85 (76.962421ms)
-- stdout --
	* Profile "addons-192260" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-192260"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-192260
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-192260: exit status 85 (81.453438ms)
-- stdout --
	* Profile "addons-192260" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-192260"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
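
Note: both PreSetup subtests pin down the same contract: addon operations against a profile that does not exist must fail with exit status 85 and point the user at minikube start instead of creating anything implicitly. For example (profile name hypothetical):

    minikube addons enable dashboard -p no-such-profile
    echo $?   # 85, matching the two runs above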

TestAddons/Setup (129.83s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-192260 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-192260 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.833381052s)
--- PASS: TestAddons/Setup (129.83s)
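
Note: the Setup run enables fifteen addons in a single start invocation via repeated --addons flags. The same addons can be toggled individually on the running profile, which is the pattern the parallel subtests below use to clean up after themselves:

    minikube -p addons-192260 addons disable registry --alsologtostderr -v=1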

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-192260 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-192260 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (11.58s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-192260 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-192260 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99dafb9a-1bcd-4ac1-832c-8e428899d144] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99dafb9a-1bcd-4ac1-832c-8e428899d144] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.005410622s
addons_test.go:694: (dbg) Run:  kubectl --context addons-192260 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-192260 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-192260 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.58s)
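
Note: the real assertion here is that the gcp-auth webhook injected fake credentials into an ordinary pod, so the busybox container sees GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT, presumably without the pod spec declaring them. The same verification by hand, with the commands from the log:

    kubectl --context addons-192260 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-192260 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"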

TestAddons/parallel/Registry (19.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.261034ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-2ds54" [f2664081-e338-412b-893e-73fbe9c38553] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009025199s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-6nf92" [eb6ff610-2f42-4c13-82ff-ba9cea5c6601] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003558218s
addons_test.go:392: (dbg) Run:  kubectl --context addons-192260 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-192260 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-192260 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.921810854s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 ip
2025/12/08 23:06:51 [DEBUG] GET http://192.168.39.248:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.76s)
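
Note: the registry check has two halves: an in-cluster wget --spider against the service DNS name (shown above) and a host-side probe of the node IP on port 5000 (the DEBUG GET line). A manual version of the host-side half; the IP is from this run, and /v2/ is the standard registry HTTP API root rather than anything shown in the log:

    curl -i http://192.168.39.248:5000/v2/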

TestAddons/parallel/RegistryCreds (0.77s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.420063ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-192260
addons_test.go:332: (dbg) Run:  kubectl --context addons-192260 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-2g456" [7f4cfdb1-656c-4d2a-b5cf-a6a0f9bf646d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004115397s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable inspektor-gadget --alsologtostderr -v=1: (5.806230527s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.009506ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kgd8r" [89d19b47-85d3-4998-b247-410e617d840f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.018975677s
addons_test.go:463: (dbg) Run:  kubectl --context addons-192260 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.04s)

                                                
                                    
TestAddons/parallel/CSI (44.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1208 23:06:51.820793  748930 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1208 23:06:51.827602  748930 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1208 23:06:51.827647  748930 kapi.go:107] duration metric: took 6.869881ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.88914ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-192260 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-192260 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [fc6f2464-fd84-4da0-a357-985106a49aec] Pending
helpers_test.go:352: "task-pv-pod" [fc6f2464-fd84-4da0-a357-985106a49aec] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004693783s
addons_test.go:572: (dbg) Run:  kubectl --context addons-192260 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-192260 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-192260 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-192260 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-192260 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-192260 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-192260 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [dbeb00e0-b7d1-4118-82d8-04a06b5d7b4a] Pending
helpers_test.go:352: "task-pv-pod-restore" [dbeb00e0-b7d1-4118-82d8-04a06b5d7b4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [dbeb00e0-b7d1-4118-82d8-04a06b5d7b4a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004556966s
addons_test.go:614: (dbg) Run:  kubectl --context addons-192260 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-192260 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-192260 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.026453082s)
--- PASS: TestAddons/parallel/CSI (44.82s)
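
Each repeated helpers_test.go:402 line above is one iteration of a poll loop: re-read the PVC's .status.phase until it reports Bound or the 6m0s budget runs out. A minimal sketch of that loop, assuming kubectl on PATH; waitPVCBound and the 2-second interval are mine, not the harness's:

// wait_pvc.go: poll a PVC's phase via jsonpath until it is Bound,
// re-creating the repeated "get pvc ... -o jsonpath={.status.phase}"
// calls in the log above. Sketch only; names are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitPVCBound(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval; the harness's own backoff may differ
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-192260", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}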

                                                
                                    
TestAddons/parallel/Headlamp (23.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-192260 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-192260 --alsologtostderr -v=1: (1.163791157s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-z2jz7" [8760c2cb-a944-43b7-8d6f-9cacb2efebd1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-z2jz7" [8760c2cb-a944-43b7-8d6f-9cacb2efebd1] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.035473763s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable headlamp --alsologtostderr -v=1: (6.373037933s)
--- PASS: TestAddons/parallel/Headlamp (23.57s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-8gwdg" [460a7416-94ea-4c15-bb10-1229bfa9204c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004620946s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                    
TestAddons/parallel/LocalPath (60.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-192260 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-192260 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e9673a40-9f20-4d55-b554-e8f963073693] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e9673a40-9f20-4d55-b554-e8f963073693] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e9673a40-9f20-4d55-b554-e8f963073693] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.300561116s
addons_test.go:967: (dbg) Run:  kubectl --context addons-192260 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 ssh "cat /opt/local-path-provisioner/pvc-b5bd1323-8a56-4e58-93b7-550ac9856f8e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-192260 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-192260 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.308599455s)
--- PASS: TestAddons/parallel/LocalPath (60.47s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.95s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-zzn4k" [89aaba7e-70b7-4a68-b81c-78d0eca0b964] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007245387s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.95s)

                                                
                                    
TestAddons/parallel/Yakd (10.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-g7btn" [fd368b35-6927-40ab-9c3d-6388a80bd99f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005441035s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192260 addons disable yakd --alsologtostderr -v=1: (5.868741158s)
--- PASS: TestAddons/parallel/Yakd (10.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (90.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-192260
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-192260: (1m30.185617513s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-192260
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-192260
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-192260
--- PASS: TestAddons/StoppedEnableDisable (90.40s)

                                                
                                    
TestCertOptions (81.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-962577 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-962577 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m20.23623171s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-962577 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-962577 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-962577 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-962577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-962577
--- PASS: TestCertOptions (81.53s)
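
The openssl x509 dump above exists so the test can confirm that the extra --apiserver-ips and --apiserver-names landed in the certificate's SANs. The same check can be written with crypto/x509; a sketch assuming the certificate has been copied out of the VM to a local file (the path is illustrative, not where the test reads it):

// check_sans.go: parse an apiserver certificate and verify the expected
// IP and DNS SANs, the property the openssl dump above is inspected for.
// Sketch; "apiserver.crt" is an assumed local copy of the cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	wantIP := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			fmt.Println("found expected IP SAN:", ip)
		}
	}
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			fmt.Println("found expected DNS SAN:", name)
		}
	}
}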

                                                
                                    
TestCertExpiration (291.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-134582 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-134582 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.273617654s)
E1209 00:01:12.273884  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-134582 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1209 00:04:18.117505  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-134582 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.71935054s)
helpers_test.go:175: Cleaning up "cert-expiration-134582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-134582
--- PASS: TestCertExpiration (291.90s)

                                                
                                    
TestForceSystemdFlag (57.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-068060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-068060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.643131127s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-068060 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-068060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-068060
--- PASS: TestForceSystemdFlag (57.72s)

                                                
                                    
TestForceSystemdEnv (80.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-158533 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-158533 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.228153056s)
helpers_test.go:175: Cleaning up "force-systemd-env-158533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-158533
--- PASS: TestForceSystemdEnv (80.21s)

                                                
                                    
TestErrorSpam/setup (38.46s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-179941 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-179941 --driver=kvm2  --container-runtime=crio
E1208 23:11:12.280745  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.287253  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.298724  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.320227  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.361755  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.443275  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.604919  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:12.926686  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:13.568905  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:14.850601  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:17.413635  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:22.535403  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:11:32.777220  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-179941 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-179941 --driver=kvm2  --container-runtime=crio: (38.459052452s)
--- PASS: TestErrorSpam/setup (38.46s)

                                                
                                    
TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.68s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 status
--- PASS: TestErrorSpam/status (0.68s)

                                                
                                    
TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop: (1.865387876s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop: (1.541904206s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-179941 --log_dir /tmp/nospam-179941 stop: (1.851393684s)
--- PASS: TestErrorSpam/stop (5.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/test/nested/copy/748930/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1208 23:12:34.221996  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-944324 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (48.14552852s)
--- PASS: TestFunctional/serial/StartWithProxy (48.15s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (50.58s)

=== RUN   TestFunctional/serial/SoftStart
I1208 23:12:41.629718  748930 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-944324 --alsologtostderr -v=8: (50.581819345s)
functional_test.go:678: soft start took 50.582716571s for "functional-944324" cluster.
I1208 23:13:32.211928  748930 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (50.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-944324 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 cache add registry.k8s.io/pause:3.3: (1.018015113s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 cache add registry.k8s.io/pause:latest: (1.093378631s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-944324 /tmp/TestFunctionalserialCacheCmdcacheadd_local1283680717/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache add minikube-local-cache-test:functional-944324
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 cache add minikube-local-cache-test:functional-944324: (1.901991347s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache delete minikube-local-cache-test:functional-944324
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-944324
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.015941ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)
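
The reload test follows a remove/verify/reload/verify cycle: delete the cached image on the node, confirm crictl no longer finds it (the expected exit status 1 above), run cache reload, then confirm the image is back. A compressed sketch of that cycle, reusing the binary path and profile from the log; not the harness's actual helper:

// cache_reload.go: re-create the rmi -> inspecti (expect failure) ->
// cache reload -> inspecti (expect success) cycle from the log above.
// Sketch; the relative binary path is the one the harness invokes.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	p := "functional-944324"
	img := "registry.k8s.io/pause:latest"
	_ = run("-p", p, "ssh", "sudo", "crictl", "rmi", img)
	if run("-p", p, "ssh", "sudo", "crictl", "inspecti", img) == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	if err := run("-p", p, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored the image")
}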

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 kubectl -- --context functional-944324 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-944324 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.77s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 23:13:56.143435  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-944324 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.766243283s)
functional_test.go:776: restart took 30.766378795s for "functional-944324" cluster.
I1208 23:14:10.725794  748930 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (30.77s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-944324 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
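
The phase/status pairs above come from listing the control-plane pods as JSON and reading each pod's .status.phase together with its Ready condition. A minimal sketch of that decoding, assuming kubectl on PATH; the struct models only the fields this check needs:

// component_health.go: list control-plane pods as JSON and print each
// pod's phase and Ready condition, matching the log lines above.
// Sketch; only the JSON fields needed for the check are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-944324",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}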

                                                
                                    
TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 logs: (1.271820203s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 logs --file /tmp/TestFunctionalserialLogsFileCmd2510993364/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 logs --file /tmp/TestFunctionalserialLogsFileCmd2510993364/001/logs.txt: (1.286726514s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-944324 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-944324
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-944324: exit status 115 (229.419411ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.25:31953 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-944324 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 config get cpus: exit status 14 (63.322635ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 config get cpus: exit status 14 (73.252095ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
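
The contract exercised here is that config get on an unset key fails with exit status 14 instead of printing nothing and exiting 0. A sketch of asserting that from Go, reusing the binary path and profile from the log:

// config_exit.go: assert that "config get" on an unset key exits 14,
// the contract the ConfigCmd log above exercises. Sketch only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-944324",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("got expected exit status 14 for unset key")
		return
	}
	fmt.Println("unexpected result:", err)
}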

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944324 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944324 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 755371: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.13s)

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944324 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (119.099386ms)

-- stdout --
	* [functional-944324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1208 23:14:59.363517  755464 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:14:59.363800  755464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:14:59.363810  755464 out.go:374] Setting ErrFile to fd 2...
	I1208 23:14:59.363814  755464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:14:59.364021  755464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:14:59.364498  755464 out.go:368] Setting JSON to false
	I1208 23:14:59.365581  755464 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7039,"bootTime":1765228660,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:14:59.365639  755464 start.go:143] virtualization: kvm guest
	I1208 23:14:59.367474  755464 out.go:179] * [functional-944324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:14:59.368676  755464 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:14:59.368698  755464 notify.go:221] Checking for updates...
	I1208 23:14:59.370778  755464 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:14:59.371851  755464 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:14:59.373221  755464 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:14:59.374721  755464 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:14:59.375760  755464 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:14:59.377357  755464 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:14:59.377923  755464 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:14:59.412595  755464 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 23:14:59.413757  755464 start.go:309] selected driver: kvm2
	I1208 23:14:59.413777  755464 start.go:927] validating driver "kvm2" against &{Name:functional-944324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-944324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:14:59.413885  755464 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:14:59.415843  755464 out.go:203] 
	W1208 23:14:59.416980  755464 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 23:14:59.418280  755464 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
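
The dry-run failure above is minikube's requested-memory validation firing before any VM work happens. Below is a minimal sketch of that kind of check, assuming only the 1800MB floor quoted in the error; minikube's real validation lives in its start path and is more involved.

// memcheck.go - a sketch of the memory validation exercised by the dry run.
package main

import "fmt"

const minUsableMB = 1800 // floor reported in the RSRC_INSUFFICIENT_REQ_MEMORY line above

// validateMemory returns an error when the requested allocation (in MiB)
// falls below the usable minimum, mirroring the dry-run failure mode.
func validateMemory(requestedMiB int) error {
	if requestedMiB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMiB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the test
	fmt.Println(validateMemory(4096)) // accepted: the profile default above
}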

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944324 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944324 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (118.330339ms)

                                                
                                                
-- stdout --
	* [functional-944324] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 23:14:56.882138  755259 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:14:56.882435  755259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:14:56.882445  755259 out.go:374] Setting ErrFile to fd 2...
	I1208 23:14:56.882449  755259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:14:56.882754  755259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:14:56.883154  755259 out.go:368] Setting JSON to false
	I1208 23:14:56.884144  755259 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7037,"bootTime":1765228660,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:14:56.884205  755259 start.go:143] virtualization: kvm guest
	I1208 23:14:56.885902  755259 out.go:179] * [functional-944324] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1208 23:14:56.887044  755259 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:14:56.887066  755259 notify.go:221] Checking for updates...
	I1208 23:14:56.889214  755259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:14:56.890340  755259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:14:56.891449  755259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:14:56.892407  755259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:14:56.896877  755259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:14:56.898335  755259 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:14:56.898844  755259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:14:56.929836  755259 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1208 23:14:56.930907  755259 start.go:309] selected driver: kvm2
	I1208 23:14:56.930920  755259 start.go:927] validating driver "kvm2" against &{Name:functional-944324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-944324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:14:56.931034  755259 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:14:56.932821  755259 out.go:203] 
	W1208 23:14:56.933857  755259 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 23:14:56.934888  755259 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
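
The French output above is the same dry-run flow rendered through minikube's translated message catalog, selected from the host locale. The sketch below illustrates only the locale-lookup idea, with a hypothetical two-entry catalog; minikube itself loads generated translation files rather than a hard-coded map. The two strings are taken verbatim from the logs above.

// i18n.go - a sketch of locale-driven message selection.
package main

import (
	"fmt"
	"os"
	"strings"
)

// catalog is a hypothetical stand-in for minikube's translation files.
var catalog = map[string]string{
	"en": "* Using the kvm2 driver based on existing profile",
	"fr": "* Utilisation du pilote kvm2 basé sur le profil existant",
}

// message picks a translation from LC_ALL/LANG, falling back to English.
func message() string {
	locale := os.Getenv("LC_ALL")
	if locale == "" {
		locale = os.Getenv("LANG")
	}
	lang := strings.SplitN(locale, "_", 2)[0] // "fr_FR.UTF-8" -> "fr"
	if msg, ok := catalog[lang]; ok {
		return msg
	}
	return catalog["en"]
}

func main() { fmt.Println(message()) }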

                                                
                                    
TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)
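
The second status invocation above passes a Go text/template via -f, and the {{.Host}}-style placeholders are rendered against the status object. A self-contained sketch of the same mechanism follows, with a local struct standing in for minikube's status type; the field names come from the template in the log (the "kublet" label there is spelled that way in the test itself).

// statusfmt.go - rendering a status struct through a text/template, as the
// -f flag above does.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	tmpl := template.Must(template.New("status").Parse(format))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}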

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-944324 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-944324 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-b2bnf" [3649fd1c-0672-4910-ac85-1c0e876b7270] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-b2bnf" [3649fd1c-0672-4910-ac85-1c0e876b7270] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.015413197s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.25:30792
functional_test.go:1680: http://192.168.39.25:30792: success! body:
Request served by hello-node-connect-7d85dfc575-b2bnf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.25:30792
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.60s)
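
The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the URL, then hit it. Because the pod can still be starting when the URL first appears, a client should poll rather than fire one request. A sketch of that polling check, using the URL printed in this run:

// poke.go - poll a NodePort endpoint until the echo-server answers.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForEndpoint retries GETs until the service responds or the deadline
// passes; any successful response counts, matching the test's success line.
func waitForEndpoint(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return nil
		}
		time.Sleep(2 * time.Second) // pod may still be starting
	}
	return fmt.Errorf("no response from %s within %v", url, timeout)
}

func main() {
	if err := waitForEndpoint("http://192.168.39.25:30792", time.Minute); err != nil {
		fmt.Println(err)
	}
}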

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
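
addons list -o json is the machine-readable form of the listing. The log does not show the JSON schema, so this consumption sketch decodes generically into a map keyed by addon name rather than assuming field names:

// addons.go - a sketch of consuming `minikube addons list -o json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"sort"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-944324",
		"addons", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("addons list failed:", err)
		return
	}
	// Decode only the top-level map; the per-addon payload is left raw
	// since its exact shape is not shown in the log.
	var addons map[string]json.RawMessage
	if err := json.Unmarshal(out, &addons); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	names := make([]string, 0, len(addons))
	for name := range addons {
		names = append(names, name)
	}
	sort.Strings(names)
	fmt.Println(names)
}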

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [551fda09-1be2-4540-9392-97707d59f033] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00273364s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-944324 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-944324 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-944324 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-944324 apply -f testdata/storage-provisioner/pod.yaml
I1208 23:14:24.542177  748930 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0d550e28-93ca-4505-8b54-be0cf26384b9] Pending
helpers_test.go:352: "sp-pod" [0d550e28-93ca-4505-8b54-be0cf26384b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0d550e28-93ca-4505-8b54-be0cf26384b9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.004157631s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-944324 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-944324 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-944324 apply -f testdata/storage-provisioner/pod.yaml
I1208 23:14:55.647517  748930 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f96a3c26-e306-44e8-8bb8-7490ddee1a5e] Pending
helpers_test.go:352: "sp-pod" [f96a3c26-e306-44e8-8bb8-7490ddee1a5e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005764201s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-944324 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.67s)
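
The sequence above is the actual persistence assertion: write a file on the PVC-backed mount, delete and recreate the pod, and confirm the file survived. A condensed sketch of the same steps, with names and paths taken from the log; the real test also waits for pod readiness between steps, as noted in the comment.

// pvccheck.go - data-persistence check across pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-944324"}, args...)...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// the real test waits for the new pod to be Running here
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // "foo" must still exist
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("kubectl %v: %s (err=%v)\n", s, out, err)
	}
}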

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh -n functional-944324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cp functional-944324:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2448039060/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh -n functional-944324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh -n functional-944324 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)
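
Each cp above is verified by reading the file back through ssh with sudo cat. A sketch of that round-trip comparison, using the profile and paths from the log:

// cpcheck.go - copy a file into the VM and confirm it reads back intact.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if out, err := exec.Command(mk, "-p", "functional-944324", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v: %s", err, out))
	}
	remote, err := exec.Command(mk, "-p", "functional-944324", "ssh",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip intact:", bytes.Equal(local, remote))
}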

                                                
                                    
TestFunctional/parallel/MySQL (36.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-944324 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-6bcdcbc558-llxj4" [b016d2c2-2817-4654-813d-ae585cb1171f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-6bcdcbc558-llxj4" [b016d2c2-2817-4654-813d-ae585cb1171f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.00609265s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;": exit status 1 (174.273086ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1208 23:14:45.176335  748930 retry.go:31] will retry after 1.402753421s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;": exit status 1 (324.388214ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1208 23:14:46.904933  748930 retry.go:31] will retry after 1.789493138s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;": exit status 1 (156.067942ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1208 23:14:48.851694  748930 retry.go:31] will retry after 1.852172096s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;": exit status 1 (162.258011ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1208 23:14:50.866546  748930 retry.go:31] will retry after 4.611492244s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-944324 exec mysql-6bcdcbc558-llxj4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.85s)
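
The retries above (1.4s, 1.8s, 1.9s, 4.6s) show the test absorbing mysqld's startup phases: first authentication is not ready (ERROR 1045), then the socket is not up (ERROR 2002), until the query finally succeeds. Below is a minimal sketch of that grow-and-jitter retry pattern; the exact backoff and jitter factors are assumptions, not minikube's retry.go implementation.

// retrydemo.go - retry with growing, jittered waits.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, roughly doubling the
// base wait and adding jitter so a still-initializing server gets
// progressively more time between probes.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("exit status 1") // mysqld still initializing
		}
		return nil
	})
	fmt.Println("final:", err)
}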

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/748930/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /etc/test/nested/copy/748930/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/748930.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /etc/ssl/certs/748930.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/748930.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /usr/share/ca-certificates/748930.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7489302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /etc/ssl/certs/7489302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7489302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /usr/share/ca-certificates/7489302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.01s)
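
CertSync asserts that one certificate is visible at several synced locations, including the hash-named entry in /etc/ssl/certs (51391683.0 above). A sketch that reads each path over ssh and compares contents, with paths taken from the log:

// certsync.go - confirm the same PEM is readable at every synced path.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func sshCat(path string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", "-p", "functional-944324",
		"ssh", "sudo cat "+path).Output()
}

func main() {
	paths := []string{
		"/etc/ssl/certs/748930.pem",
		"/usr/share/ca-certificates/748930.pem",
		"/etc/ssl/certs/51391683.0",
	}
	first, err := sshCat(paths[0])
	if err != nil {
		panic(err)
	}
	for _, p := range paths[1:] {
		b, err := sshCat(p)
		fmt.Printf("%s: present=%v identical=%v\n", p, err == nil, bytes.Equal(first, b))
	}
}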

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-944324 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
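
The kubectl call above prints node label keys with a Go template. The same range construct can be run locally, which makes the template semantics easy to see; the label map below is hypothetical stand-in data, not the node's real label set.

// labels.go - the {{range $k, $v := ...}} construct from the --template flag.
package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{ // hypothetical node labels
		"kubernetes.io/hostname": "functional-944324",
		"kubernetes.io/os":       "linux",
	}
	// Same structure as the --template argument in the log: iterate
	// key/value pairs, print only the keys.
	const tpl = `'{{range $k, $v := .}}{{$k}} {{end}}'`
	t := template.Must(template.New("labels").Parse(tpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}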

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "sudo systemctl is-active docker": exit status 1 (188.222462ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "sudo systemctl is-active containerd": exit status 1 (193.820282ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
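
systemctl is-active prints the unit state and exits non-zero when the unit is not active, which is why the log shows "inactive" on stdout together with "ssh: Process exited with status 3". The test therefore treats a failing exit as the desired result for the runtimes that must be off. A sketch of that check:

// runtimecheck.go - only the active runtime (crio here) should pass.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-944324",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		// err != nil corresponds to "exited with status 3" in the log;
		// a nil error means the unit is active.
		fmt.Printf("%s: state=%q active=%v\n", unit, state, err == nil)
	}
}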

                                                
                                    
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944324 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-944324
localhost/kicbase/echo-server:functional-944324
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944324 image ls --format short --alsologtostderr:
I1208 23:14:59.760477  755551 out.go:360] Setting OutFile to fd 1 ...
I1208 23:14:59.760605  755551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:14:59.760611  755551 out.go:374] Setting ErrFile to fd 2...
I1208 23:14:59.760617  755551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:14:59.761357  755551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:14:59.762123  755551 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:14:59.762279  755551 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:14:59.764739  755551 ssh_runner.go:195] Run: systemctl --version
I1208 23:14:59.766994  755551 main.go:143] libmachine: domain functional-944324 has defined MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:14:59.767404  755551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:39:30", ip: ""} in network mk-functional-944324: {Iface:virbr1 ExpiryTime:2025-12-09 00:12:08 +0000 UTC Type:0 Mac:52:54:00:be:39:30 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-944324 Clientid:01:52:54:00:be:39:30}
I1208 23:14:59.767433  755551 main.go:143] libmachine: domain functional-944324 has defined IP address 192.168.39.25 and MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:14:59.767628  755551 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-944324/id_rsa Username:docker}
I1208 23:14:59.851834  755551 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
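
The stderr above shows where the listing comes from: `sudo crictl images --output json` inside the VM. A sketch of consuming that output follows; the field names modeled here (images, id, repoTags) follow crictl's JSON output, but the full schema carries more fields than this struct.

// imagels.go - rebuild the short image listing from crictl's JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-944324",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // same lines as `image ls --format short`
		}
	}
}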

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944324 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-944324  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/minikube-local-cache-test     │ functional-944324  │ 52a32ec9e158c │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944324 image ls --format table --alsologtostderr:
I1208 23:15:00.253685  755613 out.go:360] Setting OutFile to fd 1 ...
I1208 23:15:00.254056  755613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.254072  755613 out.go:374] Setting ErrFile to fd 2...
I1208 23:15:00.254079  755613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.254519  755613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:15:00.255461  755613 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.255650  755613 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.258461  755613 ssh_runner.go:195] Run: systemctl --version
I1208 23:15:00.261249  755613 main.go:143] libmachine: domain functional-944324 has defined MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.261751  755613 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:39:30", ip: ""} in network mk-functional-944324: {Iface:virbr1 ExpiryTime:2025-12-09 00:12:08 +0000 UTC Type:0 Mac:52:54:00:be:39:30 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-944324 Clientid:01:52:54:00:be:39:30}
I1208 23:15:00.261792  755613 main.go:143] libmachine: domain functional-944324 has defined IP address 192.168.39.25 and MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.261966  755613 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-944324/id_rsa Username:docker}
I1208 23:15:00.343463  755613 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944324 image ls --format json --alsologtostderr:
[{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha2
56:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-944324"],"size":"4943877"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["re
gistry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52a32ec9e158c6c7fe91a52cec193487e52952b265fe1810ad819735ca0c0d6a","repoDigests":["localhost/minikube-local-cache-test@sha256:c96675b89b527e6ac90fbf1f7c0fc0dfdd7e327ad0d9b49360993b1127a0c881"],"repoTags":["localhost/minikube-local-cache-test:functional-944324"],"size":"3328"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9d
a","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa
7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["reg
istry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944324 image ls --format json --alsologtostderr:
I1208 23:15:00.047726  755591 out.go:360] Setting OutFile to fd 1 ...
I1208 23:15:00.047842  755591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.047850  755591 out.go:374] Setting ErrFile to fd 2...
I1208 23:15:00.047855  755591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.048029  755591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:15:00.048673  755591 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.048800  755591 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.051032  755591 ssh_runner.go:195] Run: systemctl --version
I1208 23:15:00.053309  755591 main.go:143] libmachine: domain functional-944324 has defined MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.053729  755591 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:39:30", ip: ""} in network mk-functional-944324: {Iface:virbr1 ExpiryTime:2025-12-09 00:12:08 +0000 UTC Type:0 Mac:52:54:00:be:39:30 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-944324 Clientid:01:52:54:00:be:39:30}
I1208 23:15:00.053758  755591 main.go:143] libmachine: domain functional-944324 has defined IP address 192.168.39.25 and MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.053883  755591 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-944324/id_rsa Username:docker}
I1208 23:15:00.144534  755591 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944324 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-944324
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 52a32ec9e158c6c7fe91a52cec193487e52952b265fe1810ad819735ca0c0d6a
repoDigests:
- localhost/minikube-local-cache-test@sha256:c96675b89b527e6ac90fbf1f7c0fc0dfdd7e327ad0d9b49360993b1127a0c881
repoTags:
- localhost/minikube-local-cache-test:functional-944324
size: "3328"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944324 image ls --format yaml --alsologtostderr:
I1208 23:14:59.842642  755571 out.go:360] Setting OutFile to fd 1 ...
I1208 23:14:59.842885  755571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:14:59.842894  755571 out.go:374] Setting ErrFile to fd 2...
I1208 23:14:59.842898  755571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:14:59.843115  755571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:14:59.843749  755571 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:14:59.843846  755571 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:14:59.846188  755571 ssh_runner.go:195] Run: systemctl --version
I1208 23:14:59.847991  755571 main.go:143] libmachine: domain functional-944324 has defined MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:14:59.848373  755571 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:39:30", ip: ""} in network mk-functional-944324: {Iface:virbr1 ExpiryTime:2025-12-09 00:12:08 +0000 UTC Type:0 Mac:52:54:00:be:39:30 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-944324 Clientid:01:52:54:00:be:39:30}
I1208 23:14:59.848416  755571 main.go:143] libmachine: domain functional-944324 has defined IP address 192.168.39.25 and MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:14:59.848549  755571 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-944324/id_rsa Username:docker}
I1208 23:14:59.933639  755571 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh pgrep buildkitd: exit status 1 (172.521625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image build -t localhost/my-image:functional-944324 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 image build -t localhost/my-image:functional-944324 testdata/build --alsologtostderr: (6.061619946s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944324 image build -t localhost/my-image:functional-944324 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bd27a8ce481
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-944324
--> 619a92688cb
Successfully tagged localhost/my-image:functional-944324
619a92688cb15948a6f2b0c32923ac361adb0a50952d03352f61cddd9b5f2d21
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944324 image build -t localhost/my-image:functional-944324 testdata/build --alsologtostderr:
I1208 23:15:00.133121  755602 out.go:360] Setting OutFile to fd 1 ...
I1208 23:15:00.133464  755602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.133477  755602 out.go:374] Setting ErrFile to fd 2...
I1208 23:15:00.133482  755602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:15:00.133680  755602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:15:00.134292  755602 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.135048  755602 config.go:182] Loaded profile config "functional-944324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 23:15:00.137659  755602 ssh_runner.go:195] Run: systemctl --version
I1208 23:15:00.140551  755602 main.go:143] libmachine: domain functional-944324 has defined MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.141070  755602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:39:30", ip: ""} in network mk-functional-944324: {Iface:virbr1 ExpiryTime:2025-12-09 00:12:08 +0000 UTC Type:0 Mac:52:54:00:be:39:30 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-944324 Clientid:01:52:54:00:be:39:30}
I1208 23:15:00.141118  755602 main.go:143] libmachine: domain functional-944324 has defined IP address 192.168.39.25 and MAC address 52:54:00:be:39:30 in network mk-functional-944324
I1208 23:15:00.141333  755602 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-944324/id_rsa Username:docker}
I1208 23:15:00.226895  755602 build_images.go:162] Building image from path: /tmp/build.3534548600.tar
I1208 23:15:00.226981  755602 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 23:15:00.242854  755602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3534548600.tar
I1208 23:15:00.248718  755602 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3534548600.tar: stat -c "%s %y" /var/lib/minikube/build/build.3534548600.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3534548600.tar': No such file or directory
I1208 23:15:00.248758  755602 ssh_runner.go:362] scp /tmp/build.3534548600.tar --> /var/lib/minikube/build/build.3534548600.tar (3072 bytes)
I1208 23:15:00.289314  755602 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3534548600
I1208 23:15:00.302910  755602 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3534548600 -xf /var/lib/minikube/build/build.3534548600.tar
I1208 23:15:00.314833  755602 crio.go:315] Building image: /var/lib/minikube/build/build.3534548600
I1208 23:15:00.314911  755602 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-944324 /var/lib/minikube/build/build.3534548600 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1208 23:15:06.092940  755602 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-944324 /var/lib/minikube/build/build.3534548600 --cgroup-manager=cgroupfs: (5.777999178s)
I1208 23:15:06.093044  755602 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3534548600
I1208 23:15:06.108796  755602 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3534548600.tar
I1208 23:15:06.122395  755602 build_images.go:218] Built localhost/my-image:functional-944324 from /tmp/build.3534548600.tar
I1208 23:15:06.122454  755602 build_images.go:134] succeeded building to: functional-944324
I1208 23:15:06.122462  755602 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
2025/12/08 23:15:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)
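
For reference, after the pgrep for buildkitd fails, minikube falls back to staging the local build context as a tarball under /var/lib/minikube/build on the node and building with podman there, as the stderr above shows. A minimal sketch of reproducing the same build (the Dockerfile is reconstructed from the STEP lines above; the content.txt payload is illustrative):

    # testdata/build is assumed to contain content.txt plus a Dockerfile with:
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    out/minikube-linux-amd64 -p functional-944324 image build -t localhost/my-image:functional-944324 testdata/build
    out/minikube-linux-amd64 -p functional-944324 image ls    # the new localhost/my-image tag should be listed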

TestFunctional/parallel/ImageCommands/Setup (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.961414717s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-944324
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
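
All three UpdateContextCmd cases run the same command; update-context is meant to rewrite the profile's kubeconfig entry so kubectl targets the cluster's current IP and port. A minimal sketch:

    out/minikube-linux-amd64 -p functional-944324 update-context
    kubectl --context functional-944324 get nodes    # should reach the refreshed endpoint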

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image load --daemon kicbase/echo-server:functional-944324 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-944324 image load --daemon kicbase/echo-server:functional-944324 --alsologtostderr: (1.09674745s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image load --daemon kicbase/echo-server:functional-944324 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-944324
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image load --daemon kicbase/echo-server:functional-944324 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image save kicbase/echo-server:functional-944324 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image rm kicbase/echo-server:functional-944324 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-944324
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 image save --daemon kicbase/echo-server:functional-944324 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-944324
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)
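
Taken together, the ImageCommands cases above cover the full round trip between the host docker daemon and the cluster's runtime. A condensed sketch of that workflow (the tarball path is illustrative):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-944324
    out/minikube-linux-amd64 -p functional-944324 image load --daemon kicbase/echo-server:functional-944324   # host daemon -> cluster
    out/minikube-linux-amd64 -p functional-944324 image save kicbase/echo-server:functional-944324 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-944324 image rm kicbase/echo-server:functional-944324
    out/minikube-linux-amd64 -p functional-944324 image load /tmp/echo-server-save.tar                        # tarball -> cluster
    out/minikube-linux-amd64 -p functional-944324 image save --daemon kicbase/echo-server:functional-944324   # cluster -> host daemon
    out/minikube-linux-amd64 -p functional-944324 image ls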

TestFunctional/parallel/ServiceCmd/DeployApp (31.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-944324 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-944324 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-x5rj4" [f2c9d6ed-f809-42e3-be3e-94cc713a58f1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-x5rj4" [f2c9d6ed-f809-42e3-be3e-94cc713a58f1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 31.004379444s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (31.18s)
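
The deployment the ServiceCmd cases rely on is plain kubectl; the ~31s here is almost entirely image pull and pod startup. A sketch:

    kubectl --context functional-944324 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-944324 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-944324 get pods -l app=hello-node    # wait until STATUS is Running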

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "353.467064ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.286689ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "276.752711ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.436531ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
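
The three ProfileCmd cases exercise the same listing in different output modes; in this run the --light variant returned markedly faster than the full listing. A sketch:

    out/minikube-linux-amd64 profile list                    # human-readable table
    out/minikube-linux-amd64 profile list -o json            # machine-readable
    out/minikube-linux-amd64 profile list -o json --light    # lighter variant: ~71ms vs ~277ms in this run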

TestFunctional/parallel/MountCmd/any-port (8.22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdany-port1411939310/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765235688740645343" to /tmp/TestFunctionalparallelMountCmdany-port1411939310/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765235688740645343" to /tmp/TestFunctionalparallelMountCmdany-port1411939310/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765235688740645343" to /tmp/TestFunctionalparallelMountCmdany-port1411939310/001/test-1765235688740645343
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.206642ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 23:14:48.913281  748930 retry.go:31] will retry after 738.754294ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 23:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 23:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 23:14 test-1765235688740645343
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh cat /mount-9p/test-1765235688740645343
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-944324 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [aeb79a5c-fd88-4768-98f9-e0eba7c923cd] Pending
helpers_test.go:352: "busybox-mount" [aeb79a5c-fd88-4768-98f9-e0eba7c923cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [aeb79a5c-fd88-4768-98f9-e0eba7c923cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [aeb79a5c-fd88-4768-98f9-e0eba7c923cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007265894s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-944324 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdany-port1411939310/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.22s)
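
The mount workflow above serves a host directory to the guest over 9p; the first findmnt probe can race the mount coming up, which is why a retry appears in the log. A minimal sketch (the host directory is illustrative):

    out/minikube-linux-amd64 mount -p functional-944324 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry
    out/minikube-linux-amd64 -p functional-944324 ssh -- ls -la /mount-9p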

TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdspecific-port3049325288/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.680574ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 23:14:57.129123  748930 retry.go:31] will retry after 569.967939ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdspecific-port3049325288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "sudo umount -f /mount-9p": exit status 1 (224.717132ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-944324 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdspecific-port3049325288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)
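
The specific-port variant is the same mount with the host-side 9p server pinned to a fixed port:

    out/minikube-linux-amd64 mount -p functional-944324 /tmp/hostdir:/mount-9p --port 46464 &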

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service list -o json
functional_test.go:1504: Took "460.787555ms" to run "out/minikube-linux-amd64 -p functional-944324 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.25:32171
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T" /mount1: exit status 1 (218.801529ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 23:14:58.711137  748930 retry.go:31] will retry after 338.575874ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-944324 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944324 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1423890122/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.25:32171
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
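
The ServiceCmd lookup cases resolve the same NodePort service into a URL in several shapes. A sketch against the hello-node service deployed earlier:

    out/minikube-linux-amd64 -p functional-944324 service list
    out/minikube-linux-amd64 -p functional-944324 service hello-node --url                      # http://<node-ip>:<nodeport>
    out/minikube-linux-amd64 -p functional-944324 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-944324 service hello-node --url --format={{.IP}}     # just the node IP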

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-944324 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.41s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-944324
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-944324
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-944324
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22075-744871/.minikube/files/etc/test/nested/copy/748930/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 23:16:12.277248  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-136601 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m12.868862818s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (28.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1208 23:16:21.067042  748930 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --alsologtostderr -v=8
E1208 23:16:39.985683  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-136601 --alsologtostderr -v=8: (28.426071061s)
functional_test.go:678: soft start took 28.426583233s for "functional-136601" cluster.
I1208 23:16:49.493447  748930 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (28.43s)
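
SoftStart relies on minikube start being idempotent: rerunning start against an existing profile reuses the VM and its config instead of re-provisioning, hence ~28s here versus ~73s for the initial StartWithProxy. A sketch:

    out/minikube-linux-amd64 start -p functional-136601 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    out/minikube-linux-amd64 start -p functional-136601    # soft start: reuses the existing cluster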

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-136601 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:3.3: (1.022907232s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:latest: (1.010360466s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3065929627/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache add minikube-local-cache-test:functional-136601
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 cache add minikube-local-cache-test:functional-136601: (1.926148502s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache delete minikube-local-cache-test:functional-136601
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-136601
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.414123ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)
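
The CacheCmd cases form one workflow: cache an image, delete it from the node's runtime, then let cache reload push it back. A condensed sketch:

    out/minikube-linux-amd64 -p functional-136601 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-136601 cache reload    # re-pushes cached images into the node
    out/minikube-linux-amd64 -p functional-136601 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # present again
    out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest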

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 kubectl -- --context functional-136601 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-136601 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-136601 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.070618903s)
functional_test.go:776: restart took 45.070786287s for "functional-136601" cluster.
I1208 23:17:42.165535  748930 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (45.07s)
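
--extra-config takes component.flag=value pairs and is applied by restarting the cluster, which is what the ~45s above covers:

    out/minikube-linux-amd64 start -p functional-136601 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all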

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-136601 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 logs: (1.340957711s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3067453349/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3067453349/001/logs.txt: (1.332291038s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.33s)
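
The two logs cases differ only in destination:

    out/minikube-linux-amd64 -p functional-136601 logs                         # print to stdout
    out/minikube-linux-amd64 -p functional-136601 logs --file /tmp/logs.txt    # write to a file (path illustrative)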

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-136601 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-136601
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-136601: exit status 115 (258.812263ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.20:31928 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-136601 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-136601 delete -f testdata/invalidsvc.yaml: (1.074003636s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.55s)
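
InvalidService checks the failure path: the service object exists but no running pod backs it, so minikube exits with SVC_UNREACHABLE (status 115). A sketch:

    kubectl --context functional-136601 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-136601    # expected to fail with exit status 115
    kubectl --context functional-136601 delete -f testdata/invalidsvc.yaml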

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 config get cpus: exit status 14 (72.647678ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 config get cpus: exit status 14 (79.326645ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.50s)
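
ConfigCmd cycles a key through set/get/unset; config get exits with status 14 when the key is absent, which the test asserts twice:

    out/minikube-linux-amd64 -p functional-136601 config set cpus 2
    out/minikube-linux-amd64 -p functional-136601 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-136601 config unset cpus
    out/minikube-linux-amd64 -p functional-136601 config get cpus      # exit status 14: key not found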

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (11.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136601 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136601 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 758324: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (11.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136601 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (114.695291ms)

-- stdout --
	* [functional-136601] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 23:18:22.122654  758280 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:18:22.122931  758280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:18:22.122943  758280 out.go:374] Setting ErrFile to fd 2...
	I1208 23:18:22.122947  758280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:18:22.123182  758280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:18:22.123703  758280 out.go:368] Setting JSON to false
	I1208 23:18:22.124651  758280 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7242,"bootTime":1765228660,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:18:22.124725  758280 start.go:143] virtualization: kvm guest
	I1208 23:18:22.126552  758280 out.go:179] * [functional-136601] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 23:18:22.127672  758280 notify.go:221] Checking for updates...
	I1208 23:18:22.127687  758280 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:18:22.129239  758280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:18:22.130574  758280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:18:22.131854  758280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:18:22.132938  758280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:18:22.133889  758280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:18:22.135215  758280 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 23:18:22.135748  758280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:18:22.167836  758280 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 23:18:22.169063  758280 start.go:309] selected driver: kvm2
	I1208 23:18:22.169083  758280 start.go:927] validating driver "kvm2" against &{Name:functional-136601 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-136601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:18:22.169191  758280 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:18:22.171152  758280 out.go:203] 
	W1208 23:18:22.172330  758280 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 23:18:22.173521  758280 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136601 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136601 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (118.083496ms)
-- stdout --
	* [functional-136601] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1208 23:18:22.005746  758264 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:18:22.005851  758264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:18:22.005859  758264 out.go:374] Setting ErrFile to fd 2...
	I1208 23:18:22.005865  758264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:18:22.006193  758264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:18:22.006698  758264 out.go:368] Setting JSON to false
	I1208 23:18:22.007614  758264 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7242,"bootTime":1765228660,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 23:18:22.007670  758264 start.go:143] virtualization: kvm guest
	I1208 23:18:22.009789  758264 out.go:179] * [functional-136601] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1208 23:18:22.011052  758264 out.go:179]   - MINIKUBE_LOCATION=22075
	I1208 23:18:22.011059  758264 notify.go:221] Checking for updates...
	I1208 23:18:22.013078  758264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 23:18:22.014172  758264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1208 23:18:22.015316  758264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1208 23:18:22.016356  758264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 23:18:22.017541  758264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 23:18:22.019243  758264 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 23:18:22.020051  758264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 23:18:22.052824  758264 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1208 23:18:22.053904  758264 start.go:309] selected driver: kvm2
	I1208 23:18:22.053920  758264 start.go:927] validating driver "kvm2" against &{Name:functional-136601 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-136601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 23:18:22.054037  758264 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 23:18:22.056009  758264 out.go:203] 
	W1208 23:18:22.057052  758264 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 23:18:22.058096  758264 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (26.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-136601 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-136601 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-hdgj8" [e7ec23a1-f881-4ceb-8f75-8c9fb0b27ea9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-hdgj8" [e7ec23a1-f881-4ceb-8f75-8c9fb0b27ea9] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.032336583s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.20:32115
functional_test.go:1680: http://192.168.39.20:32115: success! body:
Request served by hello-node-connect-9f67c86d4-hdgj8
HTTP/1.1 GET /
Host: 192.168.39.20:32115
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (26.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (40.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [332128e6-f5e3-457a-a93b-6e0a483d55de] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006106184s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-136601 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-136601 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-136601 get pvc myclaim -o=json
I1208 23:17:56.461771  748930 retry.go:31] will retry after 1.47858477s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b2c124e8-27de-4b3b-b590-d8cf9e355567 ResourceVersion:699 Generation:0 CreationTimestamp:2025-12-08 23:17:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a1a3d0 VolumeMode:0xc001a1a3e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-136601 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-136601 apply -f testdata/storage-provisioner/pod.yaml
I1208 23:17:58.168559  748930 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [abf2b9fd-05c6-4875-860c-31965be74d89] Pending
helpers_test.go:352: "sp-pod" [abf2b9fd-05c6-4875-860c-31965be74d89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [abf2b9fd-05c6-4875-860c-31965be74d89] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004680202s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-136601 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-136601 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-136601 apply -f testdata/storage-provisioner/pod.yaml
I1208 23:18:25.266963  748930 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [03ecb0d8-e14e-49ae-ae10-139d186d7e78] Pending
helpers_test.go:352: "sp-pod" [03ecb0d8-e14e-49ae-ae10-139d186d7e78] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.007583469s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-136601 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (40.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh -n functional-136601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cp functional-136601:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp891134889/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh -n functional-136601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh -n functional-136601 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (31.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-136601 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-7d7b65bc95-n8snd" [dbe74b75-41ae-4870-ba6c-838419ea7c45] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-7d7b65bc95-n8snd" [dbe74b75-41ae-4870-ba6c-838419ea7c45] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 24.004615698s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;": exit status 1 (154.790735ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1208 23:18:14.538789  748930 retry.go:31] will retry after 1.391395931s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;": exit status 1 (204.427807ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1208 23:18:16.135593  748930 retry.go:31] will retry after 1.727605753s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;": exit status 1 (242.601108ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1208 23:18:18.106959  748930 retry.go:31] will retry after 2.988289262s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-136601 exec mysql-7d7b65bc95-n8snd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (31.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/748930/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /etc/test/nested/copy/748930/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/748930.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /etc/ssl/certs/748930.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/748930.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /usr/share/ca-certificates/748930.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7489302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /etc/ssl/certs/7489302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7489302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /usr/share/ca-certificates/7489302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-136601 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "sudo systemctl is-active docker": exit status 1 (214.871297ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "sudo systemctl is-active containerd": exit status 1 (194.725102ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136601 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-136601
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136601 image ls --format short --alsologtostderr:
I1208 23:18:30.916306  758669 out.go:360] Setting OutFile to fd 1 ...
I1208 23:18:30.916647  758669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:30.916660  758669 out.go:374] Setting ErrFile to fd 2...
I1208 23:18:30.916666  758669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:30.917016  758669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:18:30.917892  758669 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:30.918041  758669 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:30.920537  758669 ssh_runner.go:195] Run: systemctl --version
I1208 23:18:30.922896  758669 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:30.923306  758669 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
I1208 23:18:30.923334  758669 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:30.923495  758669 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
I1208 23:18:31.016354  758669 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136601 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-136601  │ 52a32ec9e158c │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136601 image ls --format table --alsologtostderr:
I1208 23:18:31.858239  758745 out.go:360] Setting OutFile to fd 1 ...
I1208 23:18:31.858337  758745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.858344  758745 out.go:374] Setting ErrFile to fd 2...
I1208 23:18:31.858350  758745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.858569  758745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:18:31.859204  758745 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.859319  758745 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.861418  758745 ssh_runner.go:195] Run: systemctl --version
I1208 23:18:31.863279  758745 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.863673  758745 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
I1208 23:18:31.863702  758745 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.863830  758745 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
I1208 23:18:32.016109  758745 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136601 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86"],"repoTags":["docker.io/kicbase/echo-server:latest"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"52a32ec9e158c6c7fe91a52cec193487e52952b265fe1810ad819735ca0c0d6a","repoDigests":["localhost/minikube-local-cache-test@sha256:c96675b89b527e6ac90fbf1f7c0fc0dfdd7e327ad0d9b49360993b1127a0c881"],"repoTags":["localhost/minikube-local-cache-test:functional-136601"],"size":"3328"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136601 image ls --format json --alsologtostderr:
I1208 23:18:31.847388  758739 out.go:360] Setting OutFile to fd 1 ...
I1208 23:18:31.847714  758739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.847728  758739 out.go:374] Setting ErrFile to fd 2...
I1208 23:18:31.847735  758739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.847966  758739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:18:31.848607  758739 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.848753  758739 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.851750  758739 ssh_runner.go:195] Run: systemctl --version
I1208 23:18:31.854696  758739 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.855319  758739 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
I1208 23:18:31.855353  758739 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.855643  758739 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
I1208 23:18:31.972743  758739 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136601 image ls --format yaml --alsologtostderr:
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
repoTags:
- docker.io/kicbase/echo-server:latest
size: "4943877"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52a32ec9e158c6c7fe91a52cec193487e52952b265fe1810ad819735ca0c0d6a
repoDigests:
- localhost/minikube-local-cache-test@sha256:c96675b89b527e6ac90fbf1f7c0fc0dfdd7e327ad0d9b49360993b1127a0c881
repoTags:
- localhost/minikube-local-cache-test:functional-136601
size: "3328"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136601 image ls --format yaml --alsologtostderr:
I1208 23:18:30.973662  758694 out.go:360] Setting OutFile to fd 1 ...
I1208 23:18:30.973971  758694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:30.973982  758694 out.go:374] Setting ErrFile to fd 2...
I1208 23:18:30.973989  758694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:30.974185  758694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:18:30.974840  758694 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:30.974969  758694 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:30.977300  758694 ssh_runner.go:195] Run: systemctl --version
I1208 23:18:30.979695  758694 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:30.980139  758694 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
I1208 23:18:30.980176  758694 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:30.980332  758694 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
I1208 23:18:31.074082  758694 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.88s)
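
For reference, the stderr trace above shows that "image ls" is backed by crictl inside the guest. A minimal sketch reproducing the listing by hand, assuming the functional-136601 profile is still running:

  # List cached images as YAML (id, repoDigests, repoTags, size)
  out/minikube-linux-amd64 -p functional-136601 image ls --format yaml
  # The same inventory straight from the CRI-O runtime, over SSH
  out/minikube-linux-amd64 -p functional-136601 ssh -- sudo crictl images --output json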

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh pgrep buildkitd: exit status 1 (161.675262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image build -t localhost/my-image:functional-136601 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 image build -t localhost/my-image:functional-136601 testdata/build --alsologtostderr: (4.03788322s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136601 image build -t localhost/my-image:functional-136601 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 39b00bea8e1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-136601
--> d3e49a2888e
Successfully tagged localhost/my-image:functional-136601
d3e49a2888eda0a97cebf571e8fc4a7fab39850432267cbc837d76bc478ef3fc
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136601 image build -t localhost/my-image:functional-136601 testdata/build --alsologtostderr:
I1208 23:18:31.644503  758728 out.go:360] Setting OutFile to fd 1 ...
I1208 23:18:31.644634  758728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.644643  758728 out.go:374] Setting ErrFile to fd 2...
I1208 23:18:31.644647  758728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 23:18:31.644854  758728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
I1208 23:18:31.645449  758728 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.646113  758728 config.go:182] Loaded profile config "functional-136601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 23:18:31.648541  758728 ssh_runner.go:195] Run: systemctl --version
I1208 23:18:31.650638  758728 main.go:143] libmachine: domain functional-136601 has defined MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.651041  758728 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:55:24", ip: ""} in network mk-functional-136601: {Iface:virbr1 ExpiryTime:2025-12-09 00:15:23 +0000 UTC Type:0 Mac:52:54:00:f9:55:24 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:functional-136601 Clientid:01:52:54:00:f9:55:24}
I1208 23:18:31.651071  758728 main.go:143] libmachine: domain functional-136601 has defined IP address 192.168.39.20 and MAC address 52:54:00:f9:55:24 in network mk-functional-136601
I1208 23:18:31.651233  758728 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/functional-136601/id_rsa Username:docker}
I1208 23:18:31.736068  758728 build_images.go:162] Building image from path: /tmp/build.4077288083.tar
I1208 23:18:31.736197  758728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 23:18:31.750089  758728 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4077288083.tar
I1208 23:18:31.755703  758728 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4077288083.tar: stat -c "%s %y" /var/lib/minikube/build/build.4077288083.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4077288083.tar': No such file or directory
I1208 23:18:31.755755  758728 ssh_runner.go:362] scp /tmp/build.4077288083.tar --> /var/lib/minikube/build/build.4077288083.tar (3072 bytes)
I1208 23:18:31.803199  758728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4077288083
I1208 23:18:31.819620  758728 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4077288083 -xf /var/lib/minikube/build/build.4077288083.tar
I1208 23:18:31.834979  758728 crio.go:315] Building image: /var/lib/minikube/build/build.4077288083
I1208 23:18:31.835075  758728 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-136601 /var/lib/minikube/build/build.4077288083 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1208 23:18:35.575090  758728 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-136601 /var/lib/minikube/build/build.4077288083 --cgroup-manager=cgroupfs: (3.739981543s)
I1208 23:18:35.575175  758728 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4077288083
I1208 23:18:35.590356  758728 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4077288083.tar
I1208 23:18:35.604765  758728 build_images.go:218] Built localhost/my-image:functional-136601 from /tmp/build.4077288083.tar
I1208 23:18:35.604810  758728 build_images.go:134] succeeded building to: functional-136601
I1208 23:18:35.604815  758728 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.40s)
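
The three build steps in the stdout above imply a build context of roughly the following shape. This is a reconstruction; the content.txt payload is an assumption (only its presence is confirmed by STEP 3/3):

  # Recreate a build context matching STEP 1/3 .. 3/3 above
  mkdir -p build && cd build
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo test-content > content.txt   # payload contents assumed
  out/minikube-linux-amd64 -p functional-136601 image build -t localhost/my-image:functional-136601 .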

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-136601
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.10s)
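
All three cases exercise the same command: update-context rewrites the API-server address for this profile in the active kubeconfig, so kubectl keeps working after an IP or port change. A minimal sketch, assuming functional-136601 is the current kubectl context:

  # Refresh the kubeconfig entry for the profile
  out/minikube-linux-amd64 -p functional-136601 update-context
  # Confirm where kubectl now points
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'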

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image load --daemon kicbase/echo-server:functional-136601 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 image load --daemon kicbase/echo-server:functional-136601 --alsologtostderr: (1.468403053s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image load --daemon kicbase/echo-server:functional-136601 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-136601
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image load --daemon kicbase/echo-server:functional-136601 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 image save kicbase/echo-server:functional-136601 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.80s)
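
image save exports the tagged image from the cluster's runtime to a tarball. The reverse direction is an assumption here (image load also accepts archives), giving a file-based round trip:

  out/minikube-linux-amd64 -p functional-136601 image save kicbase/echo-server:functional-136601 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-136601 image load /tmp/echo-server-save.tar   # assumed inverse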

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (24.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-136601 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-136601 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-hkvlp" [1630d6ee-40c0-407d-9553-7f0ad8d7ac9e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-hkvlp" [1630d6ee-40c0-407d-9553-7f0ad8d7ac9e] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.003972051s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (24.21s)
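
The deploy flow above is plain kubectl against the minikube context. A minimal sketch of the same steps, with the readiness wait expressed via the app=hello-node label that create deployment assigns by default:

  kubectl --context functional-136601 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-136601 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-136601 wait --for=condition=ready pod -l app=hello-node --timeout=600s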

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "291.398201ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.214041ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "284.413725ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.21ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)
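
The timings above reflect how much status probing each variant does: --light skips validating cluster state, which is why it returns in about 66ms versus roughly 284ms for the full JSON listing. A minimal sketch; the jq filter is an assumption about the JSON shape (valid/invalid profile arrays):

  out/minikube-linux-amd64 profile list -o json --light   # names only, no status probe
  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'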

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo699420099/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765235899160925181" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo699420099/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765235899160925181" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo699420099/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765235899160925181" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo699420099/001/test-1765235899160925181
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.039018ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 23:18:19.334320  748930 retry.go:31] will retry after 695.116734ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 23:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 23:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 23:18 test-1765235899160925181
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh cat /mount-9p/test-1765235899160925181
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-136601 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [491c3bd1-2d42-4091-988d-3beac1961953] Pending
helpers_test.go:352: "busybox-mount" [491c3bd1-2d42-4091-988d-3beac1961953] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [491c3bd1-2d42-4091-988d-3beac1961953] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [491c3bd1-2d42-4091-988d-3beac1961953] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003423481s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-136601 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo699420099/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.29s)
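
The any-port test drives a 9p share end to end: start the host-side mount daemon, confirm the guest sees a 9p filesystem, exchange files, then tear down. A minimal sketch of the same cycle, assuming /tmp/shared exists on the host:

  out/minikube-linux-amd64 mount -p functional-136601 /tmp/shared:/mount-9p &        # host-side 9p server
  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p" # verify inside the guest
  out/minikube-linux-amd64 -p functional-136601 ssh "sudo umount -f /mount-9p"       # tear down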

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 service list: (1.265036638s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo673223848/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (176.17333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 23:18:27.625069  748930 retry.go:31] will retry after 363.270738ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo673223848/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "sudo umount -f /mount-9p": exit status 1 (185.750519ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-136601 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo673223848/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-136601 service list -o json: (1.254932172s)
functional_test.go:1504: Took "1.255037613s" to run "out/minikube-linux-amd64 -p functional-136601 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T" /mount1: exit status 1 (208.053062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 23:18:28.980473  748930 retry.go:31] will retry after 450.906757ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-136601 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136601 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1127005228/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.31s)
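
VerifyCleanup exercises the --kill path: instead of unmounting each share, mount --kill=true terminates any mount daemons left for the profile, which is why the three stop attempts afterwards find no parent process. A minimal sketch:

  # Kill lingering mount processes for the profile in one shot
  out/minikube-linux-amd64 mount -p functional-136601 --kill=true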

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.20:30288
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 version -o=json --components
2025/12/08 23:18:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-136601 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.20:30288
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.33s)
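
HTTPS, Format, and URL are three views of the same NodePort endpoint (192.168.39.20:30288 here). A minimal sketch:

  out/minikube-linux-amd64 -p functional-136601 service hello-node --url                     # http URL
  out/minikube-linux-amd64 -p functional-136601 service hello-node --https --url             # https scheme, same endpoint
  out/minikube-linux-amd64 -p functional-136601 service hello-node --url --format='{{.IP}}'  # node IP only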

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-136601
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-136601
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-136601
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (198.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1208 23:19:18.116898  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.123394  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.134930  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.156471  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.197967  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.279258  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.440872  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:18.763095  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:19.404407  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:20.685976  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:23.247684  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:28.369654  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:38.611294  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:19:59.092685  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:20:40.054645  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:21:12.275623  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m18.364893131s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (198.94s)
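
The --ha flag provisions a multi-control-plane cluster (three control-plane nodes by default), and --wait true blocks until core components report healthy, which accounts for most of the 3m18s. A minimal sketch of the invocation plus the follow-up status check:

  out/minikube-linux-amd64 start -p ha-552216 --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5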

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 kubectl -- rollout status deployment/busybox: (4.853377874s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-hmknx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-ttnm8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-xhcnc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-hmknx -- nslookup kubernetes.default
E1208 23:22:01.976223  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-ttnm8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-xhcnc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-hmknx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-ttnm8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-xhcnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.38s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-hmknx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-hmknx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-ttnm8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-ttnm8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-xhcnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 kubectl -- exec busybox-7b57f96db7-xhcnc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
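
The host probe parses busybox nslookup output inside each pod: line 5 carries the resolved answer and the third space-separated field is the IP (the run above then pings 192.168.39.1, the host-side gateway). A sketch that chains the two steps, using one pod name from the run above:

  HOST_IP=$(kubectl --context ha-552216 exec busybox-7b57f96db7-hmknx -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-552216 exec busybox-7b57f96db7-hmknx -- ping -c 1 "$HOST_IP"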

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 node add --alsologtostderr -v 5: (45.482189093s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
E1208 23:22:50.380050  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.386710  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.398214  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.420620  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.462174  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.543743  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:22:50.705921  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.15s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-552216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1208 23:22:51.027677  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --output json --alsologtostderr -v 5
E1208 23:22:51.669921  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp testdata/cp-test.txt ha-552216:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1735512329/001/cp-test_ha-552216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test.txt"
E1208 23:22:52.951329  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216:/home/docker/cp-test.txt ha-552216-m02:/home/docker/cp-test_ha-552216_ha-552216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test_ha-552216_ha-552216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216:/home/docker/cp-test.txt ha-552216-m03:/home/docker/cp-test_ha-552216_ha-552216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test_ha-552216_ha-552216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216:/home/docker/cp-test.txt ha-552216-m04:/home/docker/cp-test_ha-552216_ha-552216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test_ha-552216_ha-552216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp testdata/cp-test.txt ha-552216-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1735512329/001/cp-test_ha-552216-m02.txt
E1208 23:22:55.513564  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m02:/home/docker/cp-test.txt ha-552216:/home/docker/cp-test_ha-552216-m02_ha-552216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test_ha-552216-m02_ha-552216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m02:/home/docker/cp-test.txt ha-552216-m03:/home/docker/cp-test_ha-552216-m02_ha-552216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test_ha-552216-m02_ha-552216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m02:/home/docker/cp-test.txt ha-552216-m04:/home/docker/cp-test_ha-552216-m02_ha-552216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test_ha-552216-m02_ha-552216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp testdata/cp-test.txt ha-552216-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1735512329/001/cp-test_ha-552216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m03:/home/docker/cp-test.txt ha-552216:/home/docker/cp-test_ha-552216-m03_ha-552216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test_ha-552216-m03_ha-552216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m03:/home/docker/cp-test.txt ha-552216-m02:/home/docker/cp-test_ha-552216-m03_ha-552216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test_ha-552216-m03_ha-552216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m03:/home/docker/cp-test.txt ha-552216-m04:/home/docker/cp-test_ha-552216-m03_ha-552216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test_ha-552216-m03_ha-552216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp testdata/cp-test.txt ha-552216-m04:/home/docker/cp-test.txt
E1208 23:23:00.635322  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1735512329/001/cp-test_ha-552216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m04:/home/docker/cp-test.txt ha-552216:/home/docker/cp-test_ha-552216-m04_ha-552216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216 "sudo cat /home/docker/cp-test_ha-552216-m04_ha-552216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m04:/home/docker/cp-test.txt ha-552216-m02:/home/docker/cp-test_ha-552216-m04_ha-552216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m02 "sudo cat /home/docker/cp-test_ha-552216-m04_ha-552216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 cp ha-552216-m04:/home/docker/cp-test.txt ha-552216-m03:/home/docker/cp-test_ha-552216-m04_ha-552216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 ssh -n ha-552216-m03 "sudo cat /home/docker/cp-test_ha-552216-m04_ha-552216-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.47s)
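
Every step in this subtest is the same round trip: `minikube cp` pushes testdata/cp-test.txt onto a node (or pulls it back out), then `minikube ssh -n <node> "sudo cat ..."` reads it back to confirm the copy landed intact. A minimal Go sketch of that pattern, using the binary path seen throughout this report; this is illustrative only, not the actual helpers_test.go implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// roundTrip copies a local file onto a cluster node with `minikube cp`,
// then reads it back over `minikube ssh` and compares the contents.
// A sketch only; helpers_test.go does the real work in the log above.
func roundTrip(profile, node, local, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", local, node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
	got, err := cat.Output()
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("contents differ after round trip via %s", node)
	}
	return nil
}

func main() {
	err := roundTrip("ha-552216", "ha-552216-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```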

TestMultiControlPlane/serial/StopSecondaryNode (88.09s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node stop m02 --alsologtostderr -v 5
E1208 23:23:10.877672  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:23:31.359267  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:24:12.320741  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:24:18.117873  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 node stop m02 --alsologtostderr -v 5: (1m27.588703876s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5: exit status 7 (497.355109ms)

-- stdout --
	ha-552216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-552216-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-552216-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-552216-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1208 23:24:30.767874  761769 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:24:30.767975  761769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:24:30.767983  761769 out.go:374] Setting ErrFile to fd 2...
	I1208 23:24:30.767987  761769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:24:30.768198  761769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:24:30.768381  761769 out.go:368] Setting JSON to false
	I1208 23:24:30.768404  761769 mustload.go:66] Loading cluster: ha-552216
	I1208 23:24:30.768544  761769 notify.go:221] Checking for updates...
	I1208 23:24:30.768712  761769 config.go:182] Loaded profile config "ha-552216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:24:30.768728  761769 status.go:174] checking status of ha-552216 ...
	I1208 23:24:30.770810  761769 status.go:371] ha-552216 host status = "Running" (err=<nil>)
	I1208 23:24:30.770833  761769 host.go:66] Checking if "ha-552216" exists ...
	I1208 23:24:30.773451  761769 main.go:143] libmachine: domain ha-552216 has defined MAC address 52:54:00:d9:a5:79 in network mk-ha-552216
	I1208 23:24:30.773962  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:79", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:18:51 +0000 UTC Type:0 Mac:52:54:00:d9:a5:79 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-552216 Clientid:01:52:54:00:d9:a5:79}
	I1208 23:24:30.773997  761769 main.go:143] libmachine: domain ha-552216 has defined IP address 192.168.39.125 and MAC address 52:54:00:d9:a5:79 in network mk-ha-552216
	I1208 23:24:30.774157  761769 host.go:66] Checking if "ha-552216" exists ...
	I1208 23:24:30.774411  761769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 23:24:30.776717  761769 main.go:143] libmachine: domain ha-552216 has defined MAC address 52:54:00:d9:a5:79 in network mk-ha-552216
	I1208 23:24:30.777090  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:79", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:18:51 +0000 UTC Type:0 Mac:52:54:00:d9:a5:79 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-552216 Clientid:01:52:54:00:d9:a5:79}
	I1208 23:24:30.777125  761769 main.go:143] libmachine: domain ha-552216 has defined IP address 192.168.39.125 and MAC address 52:54:00:d9:a5:79 in network mk-ha-552216
	I1208 23:24:30.777302  761769 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/ha-552216/id_rsa Username:docker}
	I1208 23:24:30.863832  761769 ssh_runner.go:195] Run: systemctl --version
	I1208 23:24:30.871420  761769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:24:30.891794  761769 kubeconfig.go:125] found "ha-552216" server: "https://192.168.39.254:8443"
	I1208 23:24:30.891829  761769 api_server.go:166] Checking apiserver status ...
	I1208 23:24:30.891862  761769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:24:30.915290  761769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	W1208 23:24:30.926615  761769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 23:24:30.926663  761769 ssh_runner.go:195] Run: ls
	I1208 23:24:30.931727  761769 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1208 23:24:30.937537  761769 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1208 23:24:30.937559  761769 status.go:463] ha-552216 apiserver status = Running (err=<nil>)
	I1208 23:24:30.937568  761769 status.go:176] ha-552216 status: &{Name:ha-552216 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:24:30.937584  761769 status.go:174] checking status of ha-552216-m02 ...
	I1208 23:24:30.939108  761769 status.go:371] ha-552216-m02 host status = "Stopped" (err=<nil>)
	I1208 23:24:30.939134  761769 status.go:384] host is not running, skipping remaining checks
	I1208 23:24:30.939141  761769 status.go:176] ha-552216-m02 status: &{Name:ha-552216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:24:30.939162  761769 status.go:174] checking status of ha-552216-m03 ...
	I1208 23:24:30.940294  761769 status.go:371] ha-552216-m03 host status = "Running" (err=<nil>)
	I1208 23:24:30.940312  761769 host.go:66] Checking if "ha-552216-m03" exists ...
	I1208 23:24:30.942603  761769 main.go:143] libmachine: domain ha-552216-m03 has defined MAC address 52:54:00:ed:4d:fe in network mk-ha-552216
	I1208 23:24:30.943009  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ed:4d:fe", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:20:49 +0000 UTC Type:0 Mac:52:54:00:ed:4d:fe Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-552216-m03 Clientid:01:52:54:00:ed:4d:fe}
	I1208 23:24:30.943036  761769 main.go:143] libmachine: domain ha-552216-m03 has defined IP address 192.168.39.110 and MAC address 52:54:00:ed:4d:fe in network mk-ha-552216
	I1208 23:24:30.943174  761769 host.go:66] Checking if "ha-552216-m03" exists ...
	I1208 23:24:30.943387  761769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 23:24:30.945252  761769 main.go:143] libmachine: domain ha-552216-m03 has defined MAC address 52:54:00:ed:4d:fe in network mk-ha-552216
	I1208 23:24:30.945680  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ed:4d:fe", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:20:49 +0000 UTC Type:0 Mac:52:54:00:ed:4d:fe Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-552216-m03 Clientid:01:52:54:00:ed:4d:fe}
	I1208 23:24:30.945719  761769 main.go:143] libmachine: domain ha-552216-m03 has defined IP address 192.168.39.110 and MAC address 52:54:00:ed:4d:fe in network mk-ha-552216
	I1208 23:24:30.945900  761769 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/ha-552216-m03/id_rsa Username:docker}
	I1208 23:24:31.033098  761769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:24:31.051427  761769 kubeconfig.go:125] found "ha-552216" server: "https://192.168.39.254:8443"
	I1208 23:24:31.051459  761769 api_server.go:166] Checking apiserver status ...
	I1208 23:24:31.051497  761769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:24:31.071409  761769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup
	W1208 23:24:31.084992  761769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 23:24:31.085040  761769 ssh_runner.go:195] Run: ls
	I1208 23:24:31.090006  761769 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1208 23:24:31.094913  761769 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1208 23:24:31.094934  761769 status.go:463] ha-552216-m03 apiserver status = Running (err=<nil>)
	I1208 23:24:31.094943  761769 status.go:176] ha-552216-m03 status: &{Name:ha-552216-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:24:31.094958  761769 status.go:174] checking status of ha-552216-m04 ...
	I1208 23:24:31.096638  761769 status.go:371] ha-552216-m04 host status = "Running" (err=<nil>)
	I1208 23:24:31.096661  761769 host.go:66] Checking if "ha-552216-m04" exists ...
	I1208 23:24:31.099401  761769 main.go:143] libmachine: domain ha-552216-m04 has defined MAC address 52:54:00:61:ba:4d in network mk-ha-552216
	I1208 23:24:31.099830  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ba:4d", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:22:21 +0000 UTC Type:0 Mac:52:54:00:61:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-552216-m04 Clientid:01:52:54:00:61:ba:4d}
	I1208 23:24:31.099858  761769 main.go:143] libmachine: domain ha-552216-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:61:ba:4d in network mk-ha-552216
	I1208 23:24:31.100012  761769 host.go:66] Checking if "ha-552216-m04" exists ...
	I1208 23:24:31.100274  761769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 23:24:31.102565  761769 main.go:143] libmachine: domain ha-552216-m04 has defined MAC address 52:54:00:61:ba:4d in network mk-ha-552216
	I1208 23:24:31.102966  761769 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ba:4d", ip: ""} in network mk-ha-552216: {Iface:virbr1 ExpiryTime:2025-12-09 00:22:21 +0000 UTC Type:0 Mac:52:54:00:61:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-552216-m04 Clientid:01:52:54:00:61:ba:4d}
	I1208 23:24:31.102988  761769 main.go:143] libmachine: domain ha-552216-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:61:ba:4d in network mk-ha-552216
	I1208 23:24:31.103145  761769 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/ha-552216-m04/id_rsa Username:docker}
	I1208 23:24:31.185874  761769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:24:31.201682  761769 status.go:176] ha-552216-m04 status: &{Name:ha-552216-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.09s)
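
The `status.go:176` entries in the stderr log above are `%+v` dumps of a per-node status struct; the printed field names imply a shape roughly like the sketch below (inferred from the output, not copied from minikube's source):

```go
package main

import "fmt"

// Field names taken from the "&{Name:... Host:...}" lines in the log;
// the real minikube type may differ in details.
type nodeStatus struct {
	Name       string
	Host       string // "Running" or "Stopped"
	Kubelet    string
	APIServer  string // reported as "Irrelevant" for worker nodes
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := nodeStatus{Name: "ha-552216-m02", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// %+v on a pointer reproduces the exact form seen at status.go:176.
	fmt.Printf("%+v\n", &s)
}
```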

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (31.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node start m02 --alsologtostderr -v 5
E1208 23:24:45.818632  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 node start m02 --alsologtostderr -v 5: (30.361793614s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.89s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 stop --alsologtostderr -v 5
E1208 23:25:34.243620  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:26:12.274928  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:27:35.349626  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:27:50.380294  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:28:18.088949  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 stop --alsologtostderr -v 5: (4m2.206413801s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 start --wait true --alsologtostderr -v 5
E1208 23:29:18.117474  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 start --wait true --alsologtostderr -v 5: (1m51.511799651s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.89s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.26s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node delete m03 --alsologtostderr -v 5
E1208 23:31:12.274085  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 node delete m03 --alsologtostderr -v 5: (17.623231364s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.26s)
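
The go-template in the last step walks every node's `.status.conditions` and prints the status of the `Ready` condition, one line per node. kubectl evaluates `-o go-template` with Go's text/template semantics over the object's JSON form, so the same template can be exercised locally against a trimmed-down payload (the JSON below is a hypothetical minimal example, not cluster output):

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Just enough of a `kubectl get nodes -o json` payload for the template
// to find a Ready condition; hypothetical data.
const nodesJSON = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

// The same template the test passes to kubectl (shell quoting removed).
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" — the Ready status for the single node above.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}
```

The single quotes around the template in the test command are shell quoting only; they are not part of the template itself.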

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (263.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 stop --alsologtostderr -v 5
E1208 23:32:50.379687  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:34:18.117592  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 stop --alsologtostderr -v 5: (4m23.912524734s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5: exit status 7 (69.010357ms)

-- stdout --
	ha-552216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-552216-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-552216-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1208 23:35:40.555989  765063 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:35:40.556323  765063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:35:40.556335  765063 out.go:374] Setting ErrFile to fd 2...
	I1208 23:35:40.556339  765063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:35:40.556534  765063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:35:40.556709  765063 out.go:368] Setting JSON to false
	I1208 23:35:40.556739  765063 mustload.go:66] Loading cluster: ha-552216
	I1208 23:35:40.556864  765063 notify.go:221] Checking for updates...
	I1208 23:35:40.557075  765063 config.go:182] Loaded profile config "ha-552216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:35:40.557092  765063 status.go:174] checking status of ha-552216 ...
	I1208 23:35:40.559219  765063 status.go:371] ha-552216 host status = "Stopped" (err=<nil>)
	I1208 23:35:40.559235  765063 status.go:384] host is not running, skipping remaining checks
	I1208 23:35:40.559240  765063 status.go:176] ha-552216 status: &{Name:ha-552216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:35:40.559257  765063 status.go:174] checking status of ha-552216-m02 ...
	I1208 23:35:40.560411  765063 status.go:371] ha-552216-m02 host status = "Stopped" (err=<nil>)
	I1208 23:35:40.560424  765063 status.go:384] host is not running, skipping remaining checks
	I1208 23:35:40.560428  765063 status.go:176] ha-552216-m02 status: &{Name:ha-552216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:35:40.560451  765063 status.go:174] checking status of ha-552216-m04 ...
	I1208 23:35:40.561561  765063 status.go:371] ha-552216-m04 host status = "Stopped" (err=<nil>)
	I1208 23:35:40.561579  765063 status.go:384] host is not running, skipping remaining checks
	I1208 23:35:40.561600  765063 status.go:176] ha-552216-m04 status: &{Name:ha-552216-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (263.98s)

TestMultiControlPlane/serial/RestartCluster (100.46s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1208 23:35:41.180583  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:36:12.274608  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m39.820301816s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.46s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (66.69s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 node add --control-plane --alsologtostderr -v 5
E1208 23:37:50.380971  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-552216 node add --control-plane --alsologtostderr -v 5: (1m6.032428751s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-552216 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestJSONOutput/start/Command (77.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-378774 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1208 23:39:13.450835  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:39:18.118821  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-378774 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.726271779s)
--- PASS: TestJSONOutput/start/Command (77.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-378774 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-378774 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.25s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-378774 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-378774 --output=json --user=testUser: (7.248872688s)
--- PASS: TestJSONOutput/stop/Command (7.25s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-180919 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-180919 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.961624ms)

-- stdout --
	{"specversion":"1.0","id":"f9c183e4-2da5-4f7c-82af-f8bb8c2e4744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-180919] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e26aacea-ba6c-4eaf-9fbc-3ca4d7391eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22075"}}
	{"specversion":"1.0","id":"fdf64d3e-653d-4c15-94a8-37bdb6ba7138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d315d97-93fe-4aab-8e96-e3ab193efe85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig"}}
	{"specversion":"1.0","id":"0d9809e6-ea4a-45f5-937f-2a7c39b6b7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube"}}
	{"specversion":"1.0","id":"5a61dc05-05df-40d6-9a00-5382460e4671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b752d5c1-43ad-4ab0-a1f4-3922a455abf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f7fc270b-9572-4d74-aa7d-bab8acafb678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-180919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-180919
--- PASS: TestErrorJSONOutput (0.24s)
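
Each stdout line above is a CloudEvents-style envelope; the keys (`specversion`, `id`, `source`, `type`, `datacontenttype`, `data`) come straight from the log. A small consumer sketch with a struct mirroring those keys (an illustration, not minikube's own event type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Fields mirror the JSON keys printed by `--output=json` in the log above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the TestErrorJSONOutput stdout.
	line := `{"specversion":"1.0","id":"f7fc270b-9572-4d74-aa7d-bab8acafb678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}
```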

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (73.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-215432 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-215432 --driver=kvm2  --container-runtime=crio: (36.447934752s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-217485 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-217485 --driver=kvm2  --container-runtime=crio: (34.377934194s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-215432
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-217485
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-217485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-217485
helpers_test.go:175: Cleaning up "first-215432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-215432
E1208 23:41:12.274294  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMinikubeProfile (73.47s)

TestMountStart/serial/StartWithMountFirst (19.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-263851 --memory=3072 --mount-string /tmp/TestMountStartserial3810136064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-263851 --memory=3072 --mount-string /tmp/TestMountStartserial3810136064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.421575709s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.42s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-263851 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-263851 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (19.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-284935 --memory=3072 --mount-string /tmp/TestMountStartserial3810136064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-284935 --memory=3072 --mount-string /tmp/TestMountStartserial3810136064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.127016151s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.13s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-263851 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-284935
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-284935: (1.257724783s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (18.52s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-284935
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-284935: (17.515433559s)
--- PASS: TestMountStart/serial/RestartStopped (18.52s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-284935 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (94.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-823416 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1208 23:42:50.379857  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-823416 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.619359292s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.95s)

TestMultiNode/serial/DeployApp2Nodes (6.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-823416 -- rollout status deployment/busybox: (4.431851023s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-5pvmm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-ksg96 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-5pvmm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-ksg96 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-5pvmm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-ksg96 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-5pvmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-5pvmm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-ksg96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-823416 -- exec busybox-7b57f96db7-ksg96 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
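
Editor's note: the shell pipeline above is how the test recovers the host gateway address from inside a pod: awk 'NR==5' keeps the fifth line of nslookup output and cut takes its third space-separated field, yielding the bare IP. A hedged Go sketch of the same extract-then-ping check follows; pod and profile names are taken from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile, pod := "multinode-823416", "busybox-7b57f96db7-5pvmm"
	// The pipeline isolates the IP from the fifth line of nslookup output.
	pipeline := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "sh", "-c", pipeline).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run
	// One ICMP echo from the pod proves the pod->host path works.
	if err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host", hostIP, "reachable from", pod)
}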

                                                
                                    
TestMultiNode/serial/AddNode (43s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-823416 -v=5 --alsologtostderr
E1208 23:44:15.351664  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:44:18.117090  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-823416 -v=5 --alsologtostderr: (42.574927935s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.00s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-823416 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp testdata/cp-test.txt multinode-823416:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1123289611/001/cp-test_multinode-823416.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416:/home/docker/cp-test.txt multinode-823416-m02:/home/docker/cp-test_multinode-823416_multinode-823416-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test_multinode-823416_multinode-823416-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416:/home/docker/cp-test.txt multinode-823416-m03:/home/docker/cp-test_multinode-823416_multinode-823416-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test_multinode-823416_multinode-823416-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp testdata/cp-test.txt multinode-823416-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1123289611/001/cp-test_multinode-823416-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m02:/home/docker/cp-test.txt multinode-823416:/home/docker/cp-test_multinode-823416-m02_multinode-823416.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test_multinode-823416-m02_multinode-823416.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m02:/home/docker/cp-test.txt multinode-823416-m03:/home/docker/cp-test_multinode-823416-m02_multinode-823416-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test_multinode-823416-m02_multinode-823416-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp testdata/cp-test.txt multinode-823416-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1123289611/001/cp-test_multinode-823416-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m03:/home/docker/cp-test.txt multinode-823416:/home/docker/cp-test_multinode-823416-m03_multinode-823416.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416 "sudo cat /home/docker/cp-test_multinode-823416-m03_multinode-823416.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 cp multinode-823416-m03:/home/docker/cp-test.txt multinode-823416-m02:/home/docker/cp-test_multinode-823416-m03_multinode-823416-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 ssh -n multinode-823416-m02 "sudo cat /home/docker/cp-test_multinode-823416-m03_multinode-823416-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.31s)
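
Editor's note: every cp above is immediately audited with an ssh cat, which is the point of the test: the file must survive host->node, node->host, and node->node hops byte-for-byte. A compact sketch of one hop plus its verification (profile and node names from this run):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// host -> node copy, exactly as in the log above.
	if err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-823416",
		"cp", "testdata/cp-test.txt", "multinode-823416-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read the file back over ssh and compare byte-for-byte.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-823416",
		"ssh", "-n", "multinode-823416-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(want, got) {
		fmt.Fprintln(os.Stderr, "cp round-trip mismatch")
		os.Exit(1)
	}
	fmt.Println("cp round-trip ok")
}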

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-823416 node stop m03: (1.714369374s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-823416 status: exit status 7 (316.079772ms)

                                                
                                                
-- stdout --
	multinode-823416
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-823416-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-823416-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr: exit status 7 (324.934148ms)

                                                
                                                
-- stdout --
	multinode-823416
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-823416-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-823416-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 23:44:47.877498  770579 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:44:47.877764  770579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:44:47.877772  770579 out.go:374] Setting ErrFile to fd 2...
	I1208 23:44:47.877777  770579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:44:47.877982  770579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:44:47.878153  770579 out.go:368] Setting JSON to false
	I1208 23:44:47.878181  770579 mustload.go:66] Loading cluster: multinode-823416
	I1208 23:44:47.878337  770579 notify.go:221] Checking for updates...
	I1208 23:44:47.878536  770579 config.go:182] Loaded profile config "multinode-823416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:44:47.878551  770579 status.go:174] checking status of multinode-823416 ...
	I1208 23:44:47.880693  770579 status.go:371] multinode-823416 host status = "Running" (err=<nil>)
	I1208 23:44:47.880715  770579 host.go:66] Checking if "multinode-823416" exists ...
	I1208 23:44:47.883319  770579 main.go:143] libmachine: domain multinode-823416 has defined MAC address 52:54:00:70:af:32 in network mk-multinode-823416
	I1208 23:44:47.883788  770579 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:af:32", ip: ""} in network mk-multinode-823416: {Iface:virbr1 ExpiryTime:2025-12-09 00:42:28 +0000 UTC Type:0 Mac:52:54:00:70:af:32 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:multinode-823416 Clientid:01:52:54:00:70:af:32}
	I1208 23:44:47.883819  770579 main.go:143] libmachine: domain multinode-823416 has defined IP address 192.168.39.151 and MAC address 52:54:00:70:af:32 in network mk-multinode-823416
	I1208 23:44:47.883969  770579 host.go:66] Checking if "multinode-823416" exists ...
	I1208 23:44:47.884210  770579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 23:44:47.886602  770579 main.go:143] libmachine: domain multinode-823416 has defined MAC address 52:54:00:70:af:32 in network mk-multinode-823416
	I1208 23:44:47.887007  770579 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:af:32", ip: ""} in network mk-multinode-823416: {Iface:virbr1 ExpiryTime:2025-12-09 00:42:28 +0000 UTC Type:0 Mac:52:54:00:70:af:32 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:multinode-823416 Clientid:01:52:54:00:70:af:32}
	I1208 23:44:47.887035  770579 main.go:143] libmachine: domain multinode-823416 has defined IP address 192.168.39.151 and MAC address 52:54:00:70:af:32 in network mk-multinode-823416
	I1208 23:44:47.887229  770579 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/multinode-823416/id_rsa Username:docker}
	I1208 23:44:47.970491  770579 ssh_runner.go:195] Run: systemctl --version
	I1208 23:44:47.976822  770579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:44:47.994450  770579 kubeconfig.go:125] found "multinode-823416" server: "https://192.168.39.151:8443"
	I1208 23:44:47.994492  770579 api_server.go:166] Checking apiserver status ...
	I1208 23:44:47.994551  770579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 23:44:48.015524  770579 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W1208 23:44:48.026815  770579 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 23:44:48.026873  770579 ssh_runner.go:195] Run: ls
	I1208 23:44:48.031493  770579 api_server.go:253] Checking apiserver healthz at https://192.168.39.151:8443/healthz ...
	I1208 23:44:48.036025  770579 api_server.go:279] https://192.168.39.151:8443/healthz returned 200:
	ok
	I1208 23:44:48.036050  770579 status.go:463] multinode-823416 apiserver status = Running (err=<nil>)
	I1208 23:44:48.036059  770579 status.go:176] multinode-823416 status: &{Name:multinode-823416 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:44:48.036076  770579 status.go:174] checking status of multinode-823416-m02 ...
	I1208 23:44:48.037730  770579 status.go:371] multinode-823416-m02 host status = "Running" (err=<nil>)
	I1208 23:44:48.037754  770579 host.go:66] Checking if "multinode-823416-m02" exists ...
	I1208 23:44:48.040236  770579 main.go:143] libmachine: domain multinode-823416-m02 has defined MAC address 52:54:00:21:e1:5d in network mk-multinode-823416
	I1208 23:44:48.040657  770579 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:e1:5d", ip: ""} in network mk-multinode-823416: {Iface:virbr1 ExpiryTime:2025-12-09 00:43:20 +0000 UTC Type:0 Mac:52:54:00:21:e1:5d Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-823416-m02 Clientid:01:52:54:00:21:e1:5d}
	I1208 23:44:48.040692  770579 main.go:143] libmachine: domain multinode-823416-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:21:e1:5d in network mk-multinode-823416
	I1208 23:44:48.040835  770579 host.go:66] Checking if "multinode-823416-m02" exists ...
	I1208 23:44:48.041084  770579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 23:44:48.043195  770579 main.go:143] libmachine: domain multinode-823416-m02 has defined MAC address 52:54:00:21:e1:5d in network mk-multinode-823416
	I1208 23:44:48.043575  770579 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:e1:5d", ip: ""} in network mk-multinode-823416: {Iface:virbr1 ExpiryTime:2025-12-09 00:43:20 +0000 UTC Type:0 Mac:52:54:00:21:e1:5d Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-823416-m02 Clientid:01:52:54:00:21:e1:5d}
	I1208 23:44:48.043596  770579 main.go:143] libmachine: domain multinode-823416-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:21:e1:5d in network mk-multinode-823416
	I1208 23:44:48.043734  770579 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22075-744871/.minikube/machines/multinode-823416-m02/id_rsa Username:docker}
	I1208 23:44:48.123993  770579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 23:44:48.139131  770579 status.go:176] multinode-823416-m02 status: &{Name:multinode-823416-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:44:48.139181  770579 status.go:174] checking status of multinode-823416-m03 ...
	I1208 23:44:48.140945  770579 status.go:371] multinode-823416-m03 host status = "Stopped" (err=<nil>)
	I1208 23:44:48.140964  770579 status.go:384] host is not running, skipping remaining checks
	I1208 23:44:48.140970  770579 status.go:176] multinode-823416-m03 status: &{Name:multinode-823416-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
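
Editor's note on the exit-code semantics visible above: with one node stopped, minikube status still prints the full per-node table on stdout but exits 7, so callers should treat 7 as "degraded, output valid" rather than a hard failure. A minimal sketch of reading that code (in this log, 7 appears whenever at least one host is stopped):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-823416", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node table is printed even on nonzero exit
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// 7 = at least one host stopped; stdout above says which one.
		fmt.Println("cluster degraded: some node is stopped")
	default:
		panic(err)
	}
}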

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-823416 node start m03 -v=5 --alsologtostderr: (38.345178061s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.85s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (272.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-823416
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-823416
E1208 23:46:12.275017  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:47:50.380706  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-823416: (2m32.812913324s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-823416 --wait=true -v=5 --alsologtostderr
E1208 23:49:18.117406  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-823416 --wait=true -v=5 --alsologtostderr: (1m59.435530116s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-823416
--- PASS: TestMultiNode/serial/RestartKeepsNodes (272.38s)
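
Editor's note: the invariant checked here is that a full stop/start cycle preserves cluster membership, which is why node list is captured on both sides of the restart. A sketch of the same before/after comparison (profile name from this run):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func nodeList() []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", "multinode-823416").Output()
	if err != nil {
		panic(err)
	}
	return out
}

func mustRun(args ...string) {
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		panic(err)
	}
}

func main() {
	before := nodeList()
	mustRun("stop", "-p", "multinode-823416")
	mustRun("start", "-p", "multinode-823416", "--wait=true")
	if !bytes.Equal(before, nodeList()) {
		panic("node list changed across restart")
	}
	fmt.Println("restart kept all nodes")
}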

                                                
                                    
TestMultiNode/serial/DeleteNode (2.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-823416 node delete m03: (2.21793678s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.71s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 stop
E1208 23:51:12.274864  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 23:52:21.182939  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-823416 stop: (2m23.76827424s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-823416 status: exit status 7 (65.852173ms)

                                                
                                                
-- stdout --
	multinode-823416
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-823416-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr: exit status 7 (63.785547ms)

                                                
                                                
-- stdout --
	multinode-823416
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-823416-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 23:52:25.981392  772847 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:52:25.981491  772847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:52:25.981496  772847 out.go:374] Setting ErrFile to fd 2...
	I1208 23:52:25.981500  772847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:52:25.981696  772847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:52:25.981862  772847 out.go:368] Setting JSON to false
	I1208 23:52:25.981887  772847 mustload.go:66] Loading cluster: multinode-823416
	I1208 23:52:25.982028  772847 notify.go:221] Checking for updates...
	I1208 23:52:25.982233  772847 config.go:182] Loaded profile config "multinode-823416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:52:25.982246  772847 status.go:174] checking status of multinode-823416 ...
	I1208 23:52:25.984248  772847 status.go:371] multinode-823416 host status = "Stopped" (err=<nil>)
	I1208 23:52:25.984265  772847 status.go:384] host is not running, skipping remaining checks
	I1208 23:52:25.984272  772847 status.go:176] multinode-823416 status: &{Name:multinode-823416 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 23:52:25.984293  772847 status.go:174] checking status of multinode-823416-m02 ...
	I1208 23:52:25.985464  772847 status.go:371] multinode-823416-m02 host status = "Stopped" (err=<nil>)
	I1208 23:52:25.985478  772847 status.go:384] host is not running, skipping remaining checks
	I1208 23:52:25.985483  772847 status.go:176] multinode-823416-m02 status: &{Name:multinode-823416-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (143.90s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (111.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-823416 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1208 23:52:50.379847  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-823416 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.404402776s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-823416 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.86s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-823416
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-823416-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-823416-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.555453ms)

                                                
                                                
-- stdout --
	* [multinode-823416-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-823416-m02' is duplicated with machine name 'multinode-823416-m02' in profile 'multinode-823416'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-823416-m03 --driver=kvm2  --container-runtime=crio
E1208 23:54:18.117067  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-823416-m03 --driver=kvm2  --container-runtime=crio: (38.219573584s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-823416
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-823416: exit status 80 (215.827705ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-823416 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-823416-m03 already exists in multinode-823416-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-823416-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.42s)
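
Editor's note: both refusals above are name-collision guards. A new profile may not reuse an existing machine name (multinode-823416-m02 is already node m02 of multinode-823416, so start exits 14/MK_USAGE), and node add refuses a node name already claimed elsewhere (exit 80/GUEST_NODE_ADD). A sketch that attempts a start and branches on the duplicate-name rejection, mirroring the first case:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// tryStart attempts to create a profile and reports whether the name was
// rejected as a duplicate (exit status 14, MK_USAGE, as in the log above).
func tryStart(name string) (nameConflict bool, err error) {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", name,
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Printf("%s", out) // "Profile name should be unique"
		return true, nil
	}
	return false, err
}

func main() {
	if dup, err := tryStart("multinode-823416-m02"); err != nil {
		panic(err)
	} else if dup {
		fmt.Println("name collides with an existing machine; choose another")
	}
}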

                                                
                                    
TestScheduledStopUnix (107.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-309652 --memory=3072 --driver=kvm2  --container-runtime=crio
E1208 23:57:50.381025  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-309652 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.502236512s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309652 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 23:57:57.083846  775208 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:57:57.083982  775208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:57:57.083994  775208 out.go:374] Setting ErrFile to fd 2...
	I1208 23:57:57.083999  775208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:57:57.084210  775208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:57:57.084506  775208 out.go:368] Setting JSON to false
	I1208 23:57:57.084621  775208 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:57:57.084960  775208 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:57:57.085058  775208 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/config.json ...
	I1208 23:57:57.085269  775208 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:57:57.085410  775208 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-309652 -n scheduled-stop-309652
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309652 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 23:57:57.375764  775253 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:57:57.375896  775253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:57:57.375909  775253 out.go:374] Setting ErrFile to fd 2...
	I1208 23:57:57.375936  775253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:57:57.376133  775253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:57:57.376443  775253 out.go:368] Setting JSON to false
	I1208 23:57:57.376657  775253 daemonize_unix.go:73] killing process 775242 as it is an old scheduled stop
	I1208 23:57:57.376764  775253 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:57:57.377128  775253 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:57:57.377196  775253 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/config.json ...
	I1208 23:57:57.377408  775253 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:57:57.377518  775253 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1208 23:57:57.383708  748930 retry.go:31] will retry after 105.182µs: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.384876  748930 retry.go:31] will retry after 105.954µs: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.386011  748930 retry.go:31] will retry after 234.413µs: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.387149  748930 retry.go:31] will retry after 400.864µs: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.388298  748930 retry.go:31] will retry after 745.801µs: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.389422  748930 retry.go:31] will retry after 1.1216ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.391617  748930 retry.go:31] will retry after 1.030951ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.392743  748930 retry.go:31] will retry after 2.015951ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.394924  748930 retry.go:31] will retry after 2.405576ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.398121  748930 retry.go:31] will retry after 5.71313ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.404334  748930 retry.go:31] will retry after 3.985861ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.408542  748930 retry.go:31] will retry after 6.597557ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.415765  748930 retry.go:31] will retry after 15.466848ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.431996  748930 retry.go:31] will retry after 25.468411ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.458297  748930 retry.go:31] will retry after 33.626008ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
I1208 23:57:57.492591  748930 retry.go:31] will retry after 65.585853ms: open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309652 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309652 -n scheduled-stop-309652
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-309652
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-309652 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 23:58:23.132711  775403 out.go:360] Setting OutFile to fd 1 ...
	I1208 23:58:23.132828  775403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:58:23.132841  775403 out.go:374] Setting ErrFile to fd 2...
	I1208 23:58:23.132848  775403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 23:58:23.133097  775403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1208 23:58:23.133347  775403 out.go:368] Setting JSON to false
	I1208 23:58:23.133456  775403 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:58:23.133769  775403 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 23:58:23.133850  775403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/scheduled-stop-309652/config.json ...
	I1208 23:58:23.134064  775403 mustload.go:66] Loading cluster: scheduled-stop-309652
	I1208 23:58:23.134165  775403 config.go:182] Loaded profile config "scheduled-stop-309652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-309652
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-309652: exit status 7 (63.811194ms)

                                                
                                                
-- stdout --
	scheduled-stop-309652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309652 -n scheduled-stop-309652
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-309652 -n scheduled-stop-309652: exit status 7 (61.660155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-309652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-309652
--- PASS: TestScheduledStopUnix (107.19s)
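
Editor's note: the retry.go lines above show the test waiting for the scheduler's pid file with jittered exponential backoff (delays roughly doubling from ~100µs up to tens of milliseconds). A hedged sketch of that poll loop; the cap, factor, jitter, and the demo path are illustrative, not minikube's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with jittered exponential
// backoff between attempts, like the retry.go lines in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jitter in [0.5, 1.5)x keeps concurrent pollers from synchronizing.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %s not there yet\n", jittered, path)
		time.Sleep(jittered)
		if delay *= 2; delay > 100*time.Millisecond {
			delay = 100 * time.Millisecond
		}
	}
	return fmt.Errorf("timed out after %v waiting for %s", maxWait, path)
}

func main() {
	// Hypothetical path; the real test watches .minikube/profiles/<name>/pid.
	if err := waitForFile("/tmp/scheduled-stop-demo/pid", 5*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}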

                                                
                                    
TestRunningBinaryUpgrade (121.69s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3397088270 start -p running-upgrade-453749 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3397088270 start -p running-upgrade-453749 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m2.139352978s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-453749 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-453749 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.494720006s)
helpers_test.go:175: Cleaning up "running-upgrade-453749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-453749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-453749: (1.147921282s)
--- PASS: TestRunningBinaryUpgrade (121.69s)
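
Editor's note: the upgrade test is just two starts against the same profile: the old released binary creates the cluster, then the freshly built binary starts it again in place, which is the compatibility property under test. A sketch of that sequence (the /tmp path to the old binary is generated per-run; flags copied from this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func startWith(binary string, args ...string) error {
	cmd := exec.Command(binary, append([]string{"start", "-p", "running-upgrade-453749",
		"--memory=3072"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// 1) bring the cluster up with the previous release...
	if err := startWith("/tmp/minikube-v1.35.0.3397088270",
		"--vm-driver=kvm2", "--container-runtime=crio"); err != nil {
		panic(err)
	}
	// 2) ...then point the new binary at the same running profile.
	if err := startWith("out/minikube-linux-amd64",
		"--driver=kvm2", "--container-runtime=crio"); err != nil {
		panic(err)
	}
	fmt.Println("in-place binary upgrade succeeded")
}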

                                                
                                    
TestKubernetesUpgrade (145.95s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.071558876s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-769581
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-769581: (1.873123428s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-769581 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-769581 status --format={{.Host}}: exit status 7 (67.900943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.419411563s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-769581 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.928492ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-769581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-769581
	    minikube start -p kubernetes-upgrade-769581 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7695812 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-769581 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-769581 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.147972408s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-769581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-769581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-769581: (1.238094292s)
--- PASS: TestKubernetesUpgrade (145.95s)
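
Editor's note: the sequence here encodes minikube's version policy: stop, then start with a newer --kubernetes-version to upgrade in place; asking for an older version exits 106 (K8S_DOWNGRADE_UNSUPPORTED) and leaves the cluster untouched, with delete-and-recreate as the suggested escape hatch. A sketch of upgrade-with-downgrade-guard, versions copied from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func start(version string) error {
	return exec.Command("out/minikube-linux-amd64", "start", "-p", "kubernetes-upgrade-769581",
		"--memory=3072", "--kubernetes-version="+version,
		"--driver=kvm2", "--container-runtime=crio").Run()
}

func main() {
	// Upgrade path: old cluster -> stop -> start at the newer version.
	if err := start("v1.28.0"); err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-amd64", "stop", "-p", "kubernetes-upgrade-769581").Run(); err != nil {
		panic(err)
	}
	if err := start("v1.35.0-beta.0"); err != nil {
		panic(err)
	}
	// Downgrade attempts are refused up front with exit status 106.
	err := start("v1.28.0")
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)")
	}
}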

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (105.878312ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-903560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
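
Editor's note: the guard tripped here is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the conflict is reported (exit 14) before any VM work starts. A generic sketch of that pre-flight check; flag names mirror the log, but the real implementation lives in minikube's start command, not here:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	// Fail fast, before provisioning anything, like the exit-14 path above.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}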

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903560 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1208 23:59:18.117592  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903560 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m37.639915096s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903560 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-474683 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-474683 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.51863ms)

                                                
                                                
-- stdout --
	* [false-474683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22075
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 00:00:33.897587  777646 out.go:360] Setting OutFile to fd 1 ...
	I1209 00:00:33.897897  777646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:00:33.897909  777646 out.go:374] Setting ErrFile to fd 2...
	I1209 00:00:33.897914  777646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 00:00:33.898114  777646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22075-744871/.minikube/bin
	I1209 00:00:33.898660  777646 out.go:368] Setting JSON to false
	I1209 00:00:33.899638  777646 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9774,"bootTime":1765228660,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 00:00:33.899696  777646 start.go:143] virtualization: kvm guest
	I1209 00:00:33.901721  777646 out.go:179] * [false-474683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 00:00:33.903109  777646 out.go:179]   - MINIKUBE_LOCATION=22075
	I1209 00:00:33.903151  777646 notify.go:221] Checking for updates...
	I1209 00:00:33.905822  777646 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 00:00:33.906987  777646 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22075-744871/kubeconfig
	I1209 00:00:33.908150  777646 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22075-744871/.minikube
	I1209 00:00:33.909385  777646 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 00:00:33.910587  777646 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 00:00:33.912770  777646 config.go:182] Loaded profile config "NoKubernetes-903560": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:00:33.912917  777646 config.go:182] Loaded profile config "cert-expiration-134582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:00:33.913052  777646 config.go:182] Loaded profile config "cert-options-962577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 00:00:33.913208  777646 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 00:00:33.958389  777646 out.go:179] * Using the kvm2 driver based on user configuration
	I1209 00:00:33.959528  777646 start.go:309] selected driver: kvm2
	I1209 00:00:33.959546  777646 start.go:927] validating driver "kvm2" against <nil>
	I1209 00:00:33.959558  777646 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 00:00:33.961456  777646 out.go:203] 
	W1209 00:00:33.962558  777646 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1209 00:00:33.963793  777646 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-474683 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-474683

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-474683

>>> host: /etc/nsswitch.conf:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/hosts:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/resolv.conf:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-474683

>>> host: crictl pods:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: crictl containers:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> k8s: describe netcat deployment:
error: context "false-474683" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-474683" does not exist

>>> k8s: netcat logs:
error: context "false-474683" does not exist

>>> k8s: describe coredns deployment:
error: context "false-474683" does not exist

>>> k8s: describe coredns pods:
error: context "false-474683" does not exist

>>> k8s: coredns logs:
error: context "false-474683" does not exist

>>> k8s: describe api server pod(s):
error: context "false-474683" does not exist

>>> k8s: api server logs:
error: context "false-474683" does not exist

>>> host: /etc/cni:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: ip a s:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: ip r s:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: iptables-save:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: iptables table nat:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> k8s: describe kube-proxy daemon set:
error: context "false-474683" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-474683" does not exist

>>> k8s: kube-proxy logs:
error: context "false-474683" does not exist

>>> host: kubelet daemon status:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: kubelet daemon config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> k8s: kubelet logs:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-474683

>>> host: docker daemon status:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: docker daemon config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/docker/daemon.json:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: docker system info:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: cri-docker daemon status:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: cri-docker daemon config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: cri-dockerd version:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: containerd daemon status:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: containerd daemon config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/containerd/config.toml:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: containerd config dump:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: crio daemon status:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: crio daemon config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: /etc/crio:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

>>> host: crio config:
* Profile "false-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474683"

----------------------- debugLogs end: false-474683 [took: 4.415415364s] --------------------------------
helpers_test.go:175: Cleaning up "false-474683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-474683
--- PASS: TestNetworkPlugins/group/false (4.74s)
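
Note: the "false" group is the one network-plugin test that expects "minikube start" to fail fast, which is also why every debugLogs probe above reports a missing profile/context: the cluster was never created. Based on the MK_USAGE error captured in the stderr block above and the --cni flags the sibling tests pass to net_test.go:112, a manual reproduction would look roughly like this (a sketch; the test's exact flag set is not shown in this excerpt):
	out/minikube-linux-amd64 start -p false-474683 --cni=false --driver=kvm2  --container-runtime=crio
	# expected: non-zero exit with 'MK_USAGE: The "crio" container runtime requires CNI'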

TestNoKubernetes/serial/StartWithStopK8s (29.86s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1209 00:00:55.355473  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.523329494s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-903560 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-903560 status -o json: exit status 2 (223.853804ms)

-- stdout --
	{"Name":"NoKubernetes-903560","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-903560
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-903560: (1.115464216s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.86s)
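
Note: the exit status 2 from "status -o json" above is the expected outcome, not a failure: with --no-kubernetes the host runs but kubelet and apiserver stay stopped, and minikube signals that through the exit code. A quick manual check of the same state (a sketch; assumes jq is installed on the host):
	out/minikube-linux-amd64 -p NoKubernetes-903560 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	# Running / Stopped / Stopped, matching the stdout captured above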

TestNoKubernetes/serial/Start (31.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903560 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.616793276s)
--- PASS: TestNoKubernetes/serial/Start (31.62s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22075-744871/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-903560 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-903560 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.742224ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
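
Note: "systemctl is-active --quiet" prints nothing and reports purely via exit code (0 = active, non-zero otherwise; the status 4 relayed through ssh above conventionally means the unit state is unknown/not loaded). The test only needs a non-zero code to conclude kubelet is not running; inside the guest the equivalent check would be (a sketch):
	sudo systemctl is-active --quiet service kubelet; echo $?
	# should echo a non-zero value while Kubernetes is disabled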

TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-903560
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-903560: (1.302216408s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (37.53s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-903560 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-903560 --driver=kvm2  --container-runtime=crio: (37.528794531s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-903560 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-903560 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.185808ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestISOImage/Setup (32.14s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-069000 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-069000 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.135372328s)
--- PASS: TestISOImage/Setup (32.14s)

TestStoppedBinaryUpgrade/Setup (3.76s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.76s)

TestStoppedBinaryUpgrade/Upgrade (94.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.765297693 start -p stopped-upgrade-316150 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1209 00:02:50.379569  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.765297693 start -p stopped-upgrade-316150 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (49.676655152s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.765297693 -p stopped-upgrade-316150 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.765297693 -p stopped-upgrade-316150 stop: (1.668112758s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-316150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-316150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.781892942s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.13s)
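
Note: the Upgrade subtest above reduces to three commands, taken verbatim from the log: start the profile with the previous release's binary, stop it, then restart the same profile with the freshly built binary:
	/tmp/minikube-v1.35.0.765297693 start -p stopped-upgrade-316150 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
	/tmp/minikube-v1.35.0.765297693 -p stopped-upgrade-316150 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-316150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio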

TestISOImage/Binaries/crictl (0.2s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)
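
Note: every TestISOImage/Binaries/* subtest below repeats the pattern shown above for crictl: ssh into the guest booted from the ISO and assert the tool is on PATH (a sketch of the same check run by hand):
	out/minikube-linux-amd64 -p guest-069000 ssh "which crictl"
	# exit 0 plus a path on stdout means the binary ships in the ISO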

TestISOImage/Binaries/curl (0.19s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

TestISOImage/Binaries/docker (0.17s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

TestISOImage/Binaries/git (0.18s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

TestISOImage/Binaries/iptables (0.19s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.21s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

TestISOImage/Binaries/socat (0.19s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

TestISOImage/Binaries/wget (0.21s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

TestISOImage/Binaries/VBoxControl (0.22s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.22s)

TestISOImage/Binaries/VBoxService (0.27s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.27s)

TestPause/serial/Start (114s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-165880 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-165880 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m54.001946168s)
--- PASS: TestPause/serial/Start (114.00s)

TestNetworkPlugins/group/auto/Start (99.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m39.281228002s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.28s)
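
Note: "auto" is the only plugin group started without an explicit --cni flag (compare the kindnet and calico invocations below), so it exercises whichever CNI minikube selects by default for crio:
	out/minikube-linux-amd64 start -p auto-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio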

TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-316150
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-316150: (1.143327136s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

TestNetworkPlugins/group/kindnet/Start (73.32s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m13.314894835s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.32s)

TestNetworkPlugins/group/calico/Start (96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m36.003806011s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.00s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-474683 "pgrep -a kubelet"
I1209 00:05:33.434990  748930 config.go:182] Loaded profile config "auto-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pl5kb" [f16eda42-dc0d-4f29-a1bc-2245b6fdfd3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pl5kb" [f16eda42-dc0d-4f29-a1bc-2245b6fdfd3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006232987s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)
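
Note: NetCatPod, here and in the other plugin groups, force-replaces a small netcat/dnsutils deployment and waits for its pod to report Running; a manual equivalent of what the harness does (a sketch):
	kubectl --context auto-474683 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-474683 get pods -l app=netcat -w
	# the subtest passes once the pod leaves Pending, as in the helpers_test.go:352 lines above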

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ldvv4" [219f0456-ce5e-4ca6-9120-f2da88e251f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005896332s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
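
Note: ControllerPod only waits for the CNI's own agent pod to become healthy; the kindnet wait above is equivalent to watching the labelled pod in kube-system (a sketch):
	kubectl --context kindnet-474683 -n kube-system get pods -l app=kindnet
	# "kindnet-ldvv4 ... Running" is the state the 6s wait confirmed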

TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-474683 "pgrep -a kubelet"
I1209 00:05:43.902307  748930 config.go:182] Loaded profile config "kindnet-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5qd5z" [076ec88e-0e39-4d24-a0fb-3240a3a44641] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5qd5z" [076ec88e-0e39-4d24-a0fb-3240a3a44641] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005504532s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)
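
Note: the DNS subtest resolves an in-cluster service name from inside the netcat pod, exercising the pod's resolv.conf search path and the cluster DNS service (10.96.0.10 in the dig probes earlier in this log):
	kubectl --context auto-474683 exec deployment/netcat -- nslookup kubernetes.default
	# success means the cluster DNS answered for kubernetes.default.svc.cluster.local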

TestNetworkPlugins/group/auto/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.27s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
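
Note: Localhost and HairPin reuse the same nc probe against different targets; -z only tests that the port accepts a connection, -w 5 bounds the wait. HairPin is the stricter check: the pod dials its own service name, so traffic has to loop back through the service VIP:
	kubectl --context auto-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	# exit 0 means hairpin traffic works under this CNI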

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (75.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.084337754s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.08s)

TestNetworkPlugins/group/enable-default-cni/Start (96.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m36.700735777s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.70s)

TestNetworkPlugins/group/flannel/Start (96.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1209 00:06:12.274803  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m36.581752126s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.58s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-z9wws" [55922094-4504-4743-ba93-35e542e6a15b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004166648s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-474683 "pgrep -a kubelet"
I1209 00:06:36.189823  748930 config.go:182] Loaded profile config "calico-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (11.75s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bbmzr" [62a761e3-eb98-4a36-a8ee-d08db82cfb48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bbmzr" [62a761e3-eb98-4a36-a8ee-d08db82cfb48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.505075078s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.75s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (90.07s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-474683 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.066774492s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.07s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-474683 "pgrep -a kubelet"
I1209 00:07:14.468235  748930 config.go:182] Loaded profile config "custom-flannel-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wd4tl" [0f6d37cc-597b-4be3-8119-88be4316dc90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wd4tl" [0f6d37cc-597b-4be3-8119-88be4316dc90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004447629s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-474683 "pgrep -a kubelet"
I1209 00:07:39.200459  748930 config.go:182] Loaded profile config "enable-default-cni-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-npf9r" [717c5195-fe42-499d-9cb7-cecb1d7d65e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-npf9r" [717c5195-fe42-499d-9cb7-cecb1d7d65e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004062785s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (92.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-513870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-513870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m32.407796221s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (92.41s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wjbxp" [04e07f8f-4c0b-46c2-aa81-02f11a601421] Running
E1209 00:07:50.380133  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006148231s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-474683 "pgrep -a kubelet"
I1209 00:07:53.518149  748930 config.go:182] Loaded profile config "flannel-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g87gt" [38e9deb0-3ae9-473d-8642-21c76241cded] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g87gt" [38e9deb0-3ae9-473d-8642-21c76241cded] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005035606s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (71.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m11.237949266s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.24s)

TestStartStop/group/embed-certs/serial/FirstStart (65.27s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m5.26973661s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-474683 "pgrep -a kubelet"
I1209 00:08:35.422028  748930 config.go:182] Loaded profile config "bridge-474683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-474683 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gclh6" [46fbd973-7c30-47b0-829c-c298b6ad0ce7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gclh6" [46fbd973-7c30-47b0-829c-c298b6ad0ce7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00543126s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-474683 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-474683 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-044107 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-044107 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m27.874123786s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.87s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-513870 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bb7691b8-489e-4e3c-a0ef-325c9d32dd12] Pending
helpers_test.go:352: "busybox" [bb7691b8-489e-4e3c-a0ef-325c9d32dd12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bb7691b8-489e-4e3c-a0ef-325c9d32dd12] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.007043805s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-513870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
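
Note: testdata/busybox.yaml is not reproduced in this log. Below is a minimal sketch of an equivalent pod, assuming the test only needs the integration-test=busybox label and a long-lived busybox container (the image name is taken from the VerifyKubernetesImages output later in this report), followed by the same ulimit check the test runs:

	cat <<-'EOF' | kubectl --context old-k8s-version-513870 create -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	kubectl --context old-k8s-version-513870 exec busybox -- /bin/sh -c "ulimit -n"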

TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480392 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75cfeed7-554c-426d-9c51-8644dcddd3ce] Pending
E1209 00:09:18.117198  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-944324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [75cfeed7-554c-426d-9c51-8644dcddd3ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75cfeed7-554c-426d-9c51-8644dcddd3ce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006892729s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

TestStartStop/group/embed-certs/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-863187 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [29762795-c334-4e28-9093-a04a14974701] Pending
helpers_test.go:352: "busybox" [29762795-c334-4e28-9093-a04a14974701] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [29762795-c334-4e28-9093-a04a14974701] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00445119s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-863187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-513870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-513870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.298656632s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-513870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)
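
Note: --images and --registries override an addon's per-component image name and registry; fake.domain appears to deliberately point MetricsServer at an unreachable registry, so the check is that the override lands in the deployment spec rather than that the image pulls. The describe call is the verification step; a quick manual spot-check (the grep is illustrative, not part of the test):

	kubectl --context old-k8s-version-513870 describe deploy/metrics-server -n kube-system | grep -i image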

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.165452528s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-480392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/old-k8s-version/serial/Stop (84.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-513870 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-513870 --alsologtostderr -v=3: (1m24.061779909s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.06s)

TestStartStop/group/no-preload/serial/Stop (69.62s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-480392 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-480392 --alsologtostderr -v=3: (1m9.615622933s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (69.62s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-863187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-863187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (88.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-863187 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-863187 --alsologtostderr -v=3: (1m28.295016594s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-044107 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fc225e8e-ad58-4a7d-9e8d-22703e194d58] Pending
helpers_test.go:352: "busybox" [fc225e8e-ad58-4a7d-9e8d-22703e194d58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1209 00:10:33.720756  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:33.727208  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:33.738609  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:33.760084  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:33.801578  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:33.883055  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:34.045414  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:34.367144  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:35.009245  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:36.290645  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [fc225e8e-ad58-4a7d-9e8d-22703e194d58] Running
E1209 00:10:37.702673  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:37.709141  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:37.720529  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:37.741945  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:37.783425  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:37.864926  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:38.026547  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:38.347868  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004505224s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-044107 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480392 -n no-preload-480392
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480392 -n no-preload-480392: exit status 7 (66.758322ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-480392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
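
Note on "status error: exit status 7 (may be ok)": minikube status encodes component state in its exit code, so a cleanly stopped profile exits non-zero even though nothing is wrong, and the test tolerates exit status 7 here. A sketch of the same manual check (this reading of the exit-code convention is an assumption, not something this log states):

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480392 -n no-preload-480392
	echo "status exit: $?"   # 7 observed above for a fully stopped profile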

TestStartStop/group/no-preload/serial/SecondStart (49.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1209 00:10:38.852189  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:38.989641  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:40.271792  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480392 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (49.238547163s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480392 -n no-preload-480392
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-044107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1209 00:10:42.833791  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-044107 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (87.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-044107 --alsologtostderr -v=3
E1209 00:10:43.973974  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:47.955334  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-044107 --alsologtostderr -v=3: (1m27.486587489s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513870 -n old-k8s-version-513870
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513870 -n old-k8s-version-513870: exit status 7 (66.001559ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-513870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (43.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-513870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1209 00:10:54.215767  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:10:58.196987  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-513870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (42.877289931s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513870 -n old-k8s-version-513870
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863187 -n embed-certs-863187
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863187 -n embed-certs-863187: exit status 7 (79.694565ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-863187 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (54.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1209 00:11:12.273853  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/addons-192260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:14.697633  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:18.678725  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/kindnet-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (54.133719109s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863187 -n embed-certs-863187
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.43s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-jjw2j" [069330bb-674f-4153-897e-bc4fb5d4802a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1209 00:11:29.996073  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.003779  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.016142  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.037737  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.079191  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.160758  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:30.322432  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-jjw2j" [069330bb-674f-4153-897e-bc4fb5d4802a] Running
E1209 00:11:30.644277  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:31.286452  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:32.567859  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:11:35.129932  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005269282s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t24bn" [6147a45a-0f1b-4b1c-b8d3-6d1dcc7291d7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t24bn" [6147a45a-0f1b-4b1c-b8d3-6d1dcc7291d7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005636351s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-jjw2j" [069330bb-674f-4153-897e-bc4fb5d4802a] Running
E1209 00:11:40.252030  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005304742s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-480392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480392 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
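
Note: VerifyKubernetesImages lists every image present in the node's container runtime and reports anything that is not a stock minikube/Kubernetes image; the busybox image above is the expected leftover from DeployApp. A sketch for eyeballing the same list manually (jq and the repoTags field name are assumptions):

	out/minikube-linux-amd64 -p no-preload-480392 image list --format=json | jq -r '.[].repoTags[]'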

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-480392 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480392 -n no-preload-480392
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480392 -n no-preload-480392: exit status 2 (285.591431ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480392 -n no-preload-480392
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480392 -n no-preload-480392: exit status 2 (274.255067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-480392 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480392 -n no-preload-480392
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480392 -n no-preload-480392
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)
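
Note: the non-zero exits in the Pause sequence above are intentional: after minikube pause, the API server reports Paused and the kubelet reports Stopped, so both status probes exit 2 by design, and unpause returns both to a clean exit. The round trip, using the same commands the test runs:

	out/minikube-linux-amd64 pause -p no-preload-480392 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480392 -n no-preload-480392   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480392 -n no-preload-480392     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p no-preload-480392 --alsologtostderr -v=1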

TestStartStop/group/newest-cni/serial/FirstStart (44.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-217927 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-217927 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.292008537s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.29s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t24bn" [6147a45a-0f1b-4b1c-b8d3-6d1dcc7291d7] Running
E1209 00:11:50.493732  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005915608s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-513870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-513870 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-513870 --alsologtostderr -v=1
E1209 00:11:55.659829  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/auto-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-513870 --alsologtostderr -v=1: (1.211265573s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513870 -n old-k8s-version-513870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513870 -n old-k8s-version-513870: exit status 2 (241.816303ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513870 -n old-k8s-version-513870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513870 -n old-k8s-version-513870: exit status 2 (262.379343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-513870 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513870 -n old-k8s-version-513870
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513870 -n old-k8s-version-513870
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

TestISOImage/PersistentMounts//data (0.2s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)
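
Note: each PersistentMounts subtest asserts that the path is backed by the ISO's persistent ext4 data volume rather than tmpfs: df -t ext4 prints only filesystems of that type, so the trailing grep (and with it the test) fails if the mount is absent or on the wrong filesystem. The same check works for any of the paths below:

	out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /data | grep /data"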

TestISOImage/PersistentMounts//var/lib/docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

TestISOImage/PersistentMounts//var/lib/cni (0.28s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.28s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)
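All the PersistentMounts subtests apply the same check: the path must be served from the writable ext4 data partition rather than the read-only ISO, so `df -t ext4 <path>` must list it. A minimal sketch for re-running the whole set by hand (the loop and the NOT PERSISTENT message are illustrative; the profile name guest-069000 is taken from this run):

	# For each guest path, df -t ext4 prints only ext4-backed mounts;
	# grep exits non-zero when the path is missing, i.e. not persistent.
	for path in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
	            /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
	  out/minikube-linux-amd64 -p guest-069000 ssh "df -t ext4 $path | grep $path" \
	    || echo "NOT PERSISTENT: $path"
	done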

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-687l4" [d84e2178-4dfe-4b5f-893c-8900cea40056] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021420217s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestISOImage/VersionJSON (0.17s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
--- PASS: TestISOImage/VersionJSON (0.17s)
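VersionJSON asserts that the ISO ships a parseable /version.json whose fields tie the image back to its build: iso_version, kicbase_version, minikube_version, and the source commit. A sketch for pulling the same fields on a workstation (the jq filter is an assumption; the key names are the ones printed above):

	# Read the build metadata straight out of the guest and extract the
	# four fields the test reports.
	out/minikube-linux-amd64 -p guest-069000 ssh "cat /version.json" \
	  | jq -r '.iso_version, .kicbase_version, .minikube_version, .commit'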

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-069000 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
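The eBPFSupport probe checks for /sys/kernel/btf/vmlinux, which a kernel built with CONFIG_DEBUG_INFO_BTF=y exposes and which CO-RE eBPF tooling depends on. The check is a plain file test and can be replayed as-is:

	# BTF type info must be present for modern eBPF (CO-RE) programs.
	out/minikube-linux-amd64 -p guest-069000 ssh \
	  "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"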

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-687l4" [d84e2178-4dfe-4b5f-893c-8900cea40056] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005480304s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-863187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107: exit status 7 (80.420797ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-044107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
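EnableAddonAfterStop leans on two behaviors visible in the log: `status` exits 7 when the host is Stopped (which the harness explicitly flags as "may be ok"), and addon changes are persisted to the profile, so `addons enable` succeeds against a stopped cluster and takes effect on the next start. A sketch of the same sequence (profile name from this run):

	# Expect exit code 7 (host Stopped), then enable the addon offline.
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
	echo "status exited with $?"    # 7 while the host is stopped
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-044107 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4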

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-044107 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1209 00:12:10.978529  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-044107 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (45.958727665s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-863187 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
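VerifyKubernetesImages dumps the images known to the container runtime and reports anything outside the expected Kubernetes set; the busybox and kindnetd entries above are the test workload and CNI images, not errors. To eyeball the same list (the jq filter assumes the JSON carries a repoTags array per image):

	# One repo tag per line from the runtime's image store.
	out/minikube-linux-amd64 -p embed-certs-863187 image list --format=json \
	  | jq -r '.[].repoTags[]'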

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-863187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863187 -n embed-certs-863187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863187 -n embed-certs-863187: exit status 2 (233.316289ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863187 -n embed-certs-863187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863187 -n embed-certs-863187: exit status 2 (237.867507ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-863187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863187 -n embed-certs-863187
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863187 -n embed-certs-863187
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)
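The Pause subtest encodes the status exit-code convention from the other direction: while the cluster is paused, the apiserver reports Paused and the kubelet reports Stopped, and `status` exits 2 for both (again flagged "may be ok" by the harness); after `unpause` the same queries return cleanly. The sequence by hand (profile name from this run):

	# Pause, observe exit status 2 on both components, then unpause.
	out/minikube-linux-amd64 pause -p embed-certs-863187 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863187 -n embed-certs-863187; echo "exit $?"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863187 -n embed-certs-863187; echo "exit $?"
	out/minikube-linux-amd64 unpause -p embed-certs-863187 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863187 -n embed-certs-863187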

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-217927 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-217927 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.116994069s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-217927 --alsologtostderr -v=3
E1209 00:12:33.453922  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:35.208020  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/custom-flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-217927 --alsologtostderr -v=3: (7.148776968s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-217927 -n newest-cni-217927
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-217927 -n newest-cni-217927: exit status 7 (68.048498ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-217927 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-217927 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1209 00:12:39.455342  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.461771  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.473235  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.494705  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.536136  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.618260  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:39.779818  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:40.102205  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:40.744048  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:42.026141  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:44.587636  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.280849  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.287265  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.298707  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.320280  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.361789  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.443304  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.605008  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:47.927118  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:48.569310  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:49.709285  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:49.850900  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:50.379187  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/functional-136601/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:51.939995  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/calico-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:52.413030  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:55.689832  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/custom-flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-217927 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (31.627328034s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-217927 -n newest-cni-217927
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7prd" [e0982ff0-686a-46f9-9b48-25e2434c9c35] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1209 00:12:57.535043  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 00:12:59.951265  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/enable-default-cni-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7prd" [e0982ff0-686a-46f9-9b48-25e2434c9c35] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004430577s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7prd" [e0982ff0-686a-46f9-9b48-25e2434c9c35] Running
E1209 00:13:07.776610  748930 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22075-744871/.minikube/profiles/flannel-474683/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005212487s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-044107 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-217927 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-217927 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-217927 --alsologtostderr -v=1: (1.137790543s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-217927 -n newest-cni-217927
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-217927 -n newest-cni-217927: exit status 2 (266.377211ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-217927 -n newest-cni-217927
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-217927 -n newest-cni-217927: exit status 2 (300.685443ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-217927 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-217927 -n newest-cni-217927
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-217927 -n newest-cni-217927
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-044107 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-044107 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107: exit status 2 (240.911772ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107: exit status 2 (238.151148ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-044107 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-044107 -n default-k8s-diff-port-044107
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
360 TestNetworkPlugins/group/kubenet 4.93
368 TestNetworkPlugins/group/cilium 4.03
397 TestStartStop/group/disable-driver-mounts 0.21

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-192260 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
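
Platform-gated skips such as this one usually reduce to a runtime.GOOS check. A standalone sketch of the gate behind scheduled_stop_test.go:42 (illustrative, not the suite's exact code):

package schedsketch

import (
    "runtime"
    "testing"
)

// TestScheduledStopWindowsSketch mirrors a windows-only gate: on any
// other GOOS the test skips before doing any work.
func TestScheduledStopWindowsSketch(t *testing.T) {
    if runtime.GOOS != "windows" {
        t.Skip("test only runs on windows")
    }
}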

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.93s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-474683 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-474683
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-474683
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/hosts:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/resolv.conf:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-474683
>>> host: crictl pods:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: crictl containers:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> k8s: describe netcat deployment:
error: context "kubenet-474683" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-474683" does not exist
>>> k8s: netcat logs:
error: context "kubenet-474683" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-474683" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-474683" does not exist
>>> k8s: coredns logs:
error: context "kubenet-474683" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-474683" does not exist
>>> k8s: api server logs:
error: context "kubenet-474683" does not exist
>>> host: /etc/cni:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: ip a s:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: ip r s:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: iptables-save:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: iptables table nat:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-474683" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-474683" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-474683" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: kubelet daemon config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> k8s: kubelet logs:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-474683
>>> host: docker daemon status:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: docker daemon config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: docker system info:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: cri-docker daemon status:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: cri-docker daemon config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: cri-dockerd version:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: containerd daemon status:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: containerd daemon config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: containerd config dump:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: crio daemon status:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: crio daemon config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: /etc/crio:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
>>> host: crio config:
* Profile "kubenet-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474683"
----------------------- debugLogs end: kubenet-474683 [took: 4.739080892s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-474683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-474683
--- SKIP: TestNetworkPlugins/group/kubenet (4.93s)
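
The kubenet skip is decided before any cluster exists: kubenet bypasses CNI entirely, while the crio runtime requires a CNI plugin, so the guard at net_test.go:93 bails out immediately. That is also why every debugLogs probe above reports a missing "kubenet-474683" context or profile; the cluster was never started. A sketch of such a guard, with assumed function and parameter names (not minikube's actual API):

package netsketch

import "testing"

// skipIfNoCNI illustrates the incompatibility check: the kubenet
// network plugin provides no CNI, and crio cannot run without one.
func skipIfNoCNI(t *testing.T, containerRuntime, networkPlugin string) {
    t.Helper()
    if networkPlugin == "kubenet" && containerRuntime == "crio" {
        t.Skipf("Skipping the test as %s container runtimes requires CNI", containerRuntime)
    }
}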

TestNetworkPlugins/group/cilium (4.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-474683 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-474683
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-474683
>>> host: /etc/nsswitch.conf:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/hosts:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/resolv.conf:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-474683
>>> host: crictl pods:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: crictl containers:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> k8s: describe netcat deployment:
error: context "cilium-474683" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-474683" does not exist
>>> k8s: netcat logs:
error: context "cilium-474683" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-474683" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-474683" does not exist
>>> k8s: coredns logs:
error: context "cilium-474683" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-474683" does not exist
>>> k8s: api server logs:
error: context "cilium-474683" does not exist
>>> host: /etc/cni:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: ip a s:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: ip r s:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: iptables-save:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: iptables table nat:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-474683
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-474683
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-474683" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-474683" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-474683
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-474683
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-474683" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-474683" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-474683" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-474683" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-474683" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: kubelet daemon config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> k8s: kubelet logs:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-474683
>>> host: docker daemon status:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: docker daemon config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: docker system info:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: cri-docker daemon status:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: cri-docker daemon config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: cri-dockerd version:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: containerd daemon status:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: containerd daemon config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: containerd config dump:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: crio daemon status:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: crio daemon config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: /etc/crio:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
>>> host: crio config:
* Profile "cilium-474683" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474683"
----------------------- debugLogs end: cilium-474683 [took: 3.856243328s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-474683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-474683
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-862504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-862504
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
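
The disable-driver-mounts flag only exists on the virtualbox driver, so every other driver skips this subtest after deleting the placeholder profile. A sketch of the driver gate behind start_stop_delete_test.go:101, with illustrative function and parameter names:

package startstopsketch

import "testing"

// skipUnlessDriver skips the subtest when the active driver is not the
// one the flag under test belongs to.
func skipUnlessDriver(t *testing.T, current, required string) {
    t.Helper()
    if current != required {
        t.Skipf("skipping TestStartStop/group/disable-driver-mounts - only runs on %s", required)
    }
}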