Test Report: KVM_Linux_crio 21895

382ea0a147905a9644676f66ab1ed2cbc8737b3b:2025-11-15:42335

Failed tests (2/351)

Order  Failed test                   Duration (s)
37     TestAddons/parallel/Ingress   155.58
244    TestPreload                   133.15
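The Ingress failure below comes down to the in-VM HTTP check never succeeding: the ssh'd curl exited with status 28 (which typically corresponds to curl's operation-timeout error) after the test retried for just over two minutes. For local triage, the failing step can be re-run by hand against a live profile; this is the same command the test issues, taken verbatim from the log below, with the profile name addons-663794 specific to this run:

    out/minikube-linux-amd64 -p addons-663794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"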
TestAddons/parallel/Ingress (155.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-663794 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-663794 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-663794 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2b013a75-814d-4176-8b62-830d8b345b7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2b013a75-814d-4176-8b62-830d8b345b7c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004564729s
I1115 09:09:41.129832  247445 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-663794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.203278805s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-663794 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.78
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-663794 -n addons-663794
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 logs -n 25: (1.240755586s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-071043                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-071043 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
	│ start   │ --download-only -p binary-mirror-042783 --alsologtostderr --binary-mirror http://127.0.0.1:43911 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-042783 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │                     │
	│ delete  │ -p binary-mirror-042783                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-042783 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
	│ addons  │ disable dashboard -p addons-663794                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │                     │
	│ addons  │ enable dashboard -p addons-663794                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │                     │
	│ start   │ -p addons-663794 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:08 UTC │
	│ addons  │ addons-663794 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ addons  │ addons-663794 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ enable headlamp -p addons-663794 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ ip      │ addons-663794 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ ssh     │ addons-663794 ssh cat /opt/local-path-provisioner/pvc-7cb226ef-cf3e-40d0-abc8-3408242d700f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ ssh     │ addons-663794 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-663794                                                                                                                                                                                                                                                                                                                                                                                         │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
	│ addons  │ addons-663794 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │ 15 Nov 25 09:10 UTC │
	│ addons  │ addons-663794 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │ 15 Nov 25 09:10 UTC │
	│ ip      │ addons-663794 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-663794        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:06:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:06:41.546864  248107 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:06:41.546980  248107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:41.546991  248107 out.go:374] Setting ErrFile to fd 2...
	I1115 09:06:41.546997  248107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:41.547214  248107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:06:41.547812  248107 out.go:368] Setting JSON to false
	I1115 09:06:41.548747  248107 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6544,"bootTime":1763191058,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:06:41.548838  248107 start.go:143] virtualization: kvm guest
	I1115 09:06:41.550687  248107 out.go:179] * [addons-663794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:06:41.552004  248107 notify.go:221] Checking for updates...
	I1115 09:06:41.552009  248107 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:06:41.553220  248107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:06:41.554361  248107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:06:41.555550  248107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:06:41.556604  248107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:06:41.557555  248107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:06:41.558936  248107 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:06:41.588056  248107 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 09:06:41.589094  248107 start.go:309] selected driver: kvm2
	I1115 09:06:41.589113  248107 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:06:41.589135  248107 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:06:41.590145  248107 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:06:41.590489  248107 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:06:41.590529  248107 cni.go:84] Creating CNI manager for ""
	I1115 09:06:41.590590  248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:06:41.590602  248107 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1115 09:06:41.590660  248107 start.go:353] cluster config:
	{Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1115 09:06:41.590783  248107 iso.go:125] acquiring lock: {Name:mkff40ddaa37657d9e8283719561f1fce12069ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:06:41.592375  248107 out.go:179] * Starting "addons-663794" primary control-plane node in "addons-663794" cluster
	I1115 09:06:41.593483  248107 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:06:41.593517  248107 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:06:41.593546  248107 cache.go:65] Caching tarball of preloaded images
	I1115 09:06:41.593646  248107 preload.go:238] Found /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:06:41.593661  248107 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:06:41.594034  248107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json ...
	I1115 09:06:41.594060  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json: {Name:mk4981e3557a8519da971ebcf18fd803355391a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:06:41.594212  248107 start.go:360] acquireMachinesLock for addons-663794: {Name:mkd96327c544e60a7a5bc14d0ad542aaa69bb5ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 09:06:41.594282  248107 start.go:364] duration metric: took 52.172µs to acquireMachinesLock for "addons-663794"
	I1115 09:06:41.594308  248107 start.go:93] Provisioning new machine with config: &{Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:06:41.594354  248107 start.go:125] createHost starting for "" (driver="kvm2")
	I1115 09:06:41.595706  248107 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1115 09:06:41.595869  248107 start.go:159] libmachine.API.Create for "addons-663794" (driver="kvm2")
	I1115 09:06:41.595900  248107 client.go:173] LocalClient.Create starting
	I1115 09:06:41.596000  248107 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem
	I1115 09:06:41.963260  248107 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem
	I1115 09:06:42.123428  248107 main.go:143] libmachine: creating domain...
	I1115 09:06:42.123475  248107 main.go:143] libmachine: creating network...
	I1115 09:06:42.124845  248107 main.go:143] libmachine: found existing default network
	I1115 09:06:42.125078  248107 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:06:42.125602  248107 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001daca50}
	I1115 09:06:42.125705  248107 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-663794</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:06:42.131411  248107 main.go:143] libmachine: creating private network mk-addons-663794 192.168.39.0/24...
	I1115 09:06:42.199869  248107 main.go:143] libmachine: private network mk-addons-663794 192.168.39.0/24 created
	I1115 09:06:42.200168  248107 main.go:143] libmachine: <network>
	  <name>mk-addons-663794</name>
	  <uuid>a9ac5aa2-0830-4fea-9e79-794861abb986</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9b:7f:82'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1115 09:06:42.200201  248107 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 ...
	I1115 09:06:42.200224  248107 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21895-243545/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1115 09:06:42.200240  248107 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:06:42.200308  248107 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21895-243545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21895-243545/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1115 09:06:42.480300  248107 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa...
	I1115 09:06:42.650903  248107 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk...
	I1115 09:06:42.650950  248107 main.go:143] libmachine: Writing magic tar header
	I1115 09:06:42.650971  248107 main.go:143] libmachine: Writing SSH key tar header
	I1115 09:06:42.651037  248107 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 ...
	I1115 09:06:42.651103  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794
	I1115 09:06:42.651130  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 (perms=drwx------)
	I1115 09:06:42.651140  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube/machines
	I1115 09:06:42.651150  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube/machines (perms=drwxr-xr-x)
	I1115 09:06:42.651160  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:06:42.651169  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube (perms=drwxr-xr-x)
	I1115 09:06:42.651178  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545
	I1115 09:06:42.651190  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545 (perms=drwxrwxr-x)
	I1115 09:06:42.651201  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1115 09:06:42.651212  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1115 09:06:42.651222  248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1115 09:06:42.651232  248107 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1115 09:06:42.651242  248107 main.go:143] libmachine: checking permissions on dir: /home
	I1115 09:06:42.651251  248107 main.go:143] libmachine: skipping /home - not owner
	I1115 09:06:42.651254  248107 main.go:143] libmachine: defining domain...
	I1115 09:06:42.652533  248107 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-663794</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-663794'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1115 09:06:42.657733  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:5e:74:5d in network default
	I1115 09:06:42.658286  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:42.658301  248107 main.go:143] libmachine: starting domain...
	I1115 09:06:42.658306  248107 main.go:143] libmachine: ensuring networks are active...
	I1115 09:06:42.659098  248107 main.go:143] libmachine: Ensuring network default is active
	I1115 09:06:42.659520  248107 main.go:143] libmachine: Ensuring network mk-addons-663794 is active
	I1115 09:06:42.660134  248107 main.go:143] libmachine: getting domain XML...
	I1115 09:06:42.661238  248107 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-663794</name>
	  <uuid>39d04125-32a9-467f-ac20-4c898cc459d3</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:40:3c:f2'/>
	      <source network='mk-addons-663794'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:5e:74:5d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1115 09:06:43.919486  248107 main.go:143] libmachine: waiting for domain to start...
	I1115 09:06:43.920783  248107 main.go:143] libmachine: domain is now running
	I1115 09:06:43.920800  248107 main.go:143] libmachine: waiting for IP...
	I1115 09:06:43.921491  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:43.921939  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:43.921953  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:43.922176  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:43.922237  248107 retry.go:31] will retry after 239.594725ms: waiting for domain to come up
	I1115 09:06:44.163728  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:44.164336  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:44.164356  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:44.164701  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:44.164747  248107 retry.go:31] will retry after 362.377021ms: waiting for domain to come up
	I1115 09:06:44.528189  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:44.528763  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:44.528780  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:44.529050  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:44.529089  248107 retry.go:31] will retry after 430.148195ms: waiting for domain to come up
	I1115 09:06:44.960493  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:44.961042  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:44.961059  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:44.961336  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:44.961370  248107 retry.go:31] will retry after 496.012903ms: waiting for domain to come up
	I1115 09:06:45.459109  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:45.459736  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:45.459754  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:45.460091  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:45.460140  248107 retry.go:31] will retry after 627.192444ms: waiting for domain to come up
	I1115 09:06:46.088954  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:46.089579  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:46.089595  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:46.089930  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:46.089963  248107 retry.go:31] will retry after 677.793638ms: waiting for domain to come up
	I1115 09:06:46.768982  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:46.769589  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:46.769601  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:46.769937  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:46.769976  248107 retry.go:31] will retry after 1.101499246s: waiting for domain to come up
	I1115 09:06:47.873250  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:47.873818  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:47.873836  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:47.874118  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:47.874157  248107 retry.go:31] will retry after 1.167236905s: waiting for domain to come up
	I1115 09:06:49.043143  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:49.043842  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:49.043865  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:49.044260  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:49.044308  248107 retry.go:31] will retry after 1.619569537s: waiting for domain to come up
	I1115 09:06:50.666152  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:50.666735  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:50.666751  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:50.667031  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:50.667069  248107 retry.go:31] will retry after 1.790503798s: waiting for domain to come up
	I1115 09:06:52.459395  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:52.460021  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:52.460047  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:52.460410  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:52.460471  248107 retry.go:31] will retry after 2.798447952s: waiting for domain to come up
	I1115 09:06:55.262422  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:55.266122  248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
	I1115 09:06:55.266152  248107 main.go:143] libmachine: trying to list again with source=arp
	I1115 09:06:55.266515  248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
	I1115 09:06:55.266560  248107 retry.go:31] will retry after 2.822652152s: waiting for domain to come up
	I1115 09:06:58.091739  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.092322  248107 main.go:143] libmachine: domain addons-663794 has current primary IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.092334  248107 main.go:143] libmachine: found domain IP: 192.168.39.78
	I1115 09:06:58.092342  248107 main.go:143] libmachine: reserving static IP address...
	I1115 09:06:58.092717  248107 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-663794", mac: "52:54:00:40:3c:f2", ip: "192.168.39.78"} in network mk-addons-663794
	I1115 09:06:58.276722  248107 main.go:143] libmachine: reserved static IP address 192.168.39.78 for domain addons-663794
	I1115 09:06:58.276751  248107 main.go:143] libmachine: waiting for SSH...
	I1115 09:06:58.276757  248107 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 09:06:58.280280  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.280855  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.280893  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.281128  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:58.281438  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:58.281470  248107 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 09:06:58.421301  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:06:58.421695  248107 main.go:143] libmachine: domain creation complete
	I1115 09:06:58.422991  248107 machine.go:94] provisionDockerMachine start ...
	I1115 09:06:58.425372  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.425775  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.425821  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.426013  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:58.426223  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:58.426235  248107 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:06:58.538245  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 09:06:58.538282  248107 buildroot.go:166] provisioning hostname "addons-663794"
	I1115 09:06:58.541050  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.541417  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.541463  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.541627  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:58.541840  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:58.541855  248107 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-663794 && echo "addons-663794" | sudo tee /etc/hostname
	I1115 09:06:58.668477  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-663794
	
	I1115 09:06:58.671319  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.671744  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.671769  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.671950  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:58.672143  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:58.672165  248107 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-663794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-663794/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-663794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:06:58.790822  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:06:58.790863  248107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21895-243545/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-243545/.minikube}
	I1115 09:06:58.790934  248107 buildroot.go:174] setting up certificates
	I1115 09:06:58.790950  248107 provision.go:84] configureAuth start
	I1115 09:06:58.794550  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.795050  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.795078  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.797323  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.797776  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.797796  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.797926  248107 provision.go:143] copyHostCerts
	I1115 09:06:58.797994  248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/cert.pem (1123 bytes)
	I1115 09:06:58.798105  248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/key.pem (1675 bytes)
	I1115 09:06:58.798174  248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/ca.pem (1082 bytes)
	I1115 09:06:58.798227  248107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem org=jenkins.addons-663794 san=[127.0.0.1 192.168.39.78 addons-663794 localhost minikube]
	I1115 09:06:58.991943  248107 provision.go:177] copyRemoteCerts
	I1115 09:06:58.992015  248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:06:58.994620  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.994949  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:58.994969  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:58.995155  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:06:59.081836  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:06:59.118959  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:06:59.154779  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:06:59.184147  248107 provision.go:87] duration metric: took 393.17264ms to configureAuth
	I1115 09:06:59.184188  248107 buildroot.go:189] setting minikube options for container-runtime
	I1115 09:06:59.184384  248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:06:59.187291  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.187725  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.187752  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.187912  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:59.188111  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:59.188126  248107 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:06:59.437357  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:06:59.437386  248107 machine.go:97] duration metric: took 1.014375288s to provisionDockerMachine
	I1115 09:06:59.437399  248107 client.go:176] duration metric: took 17.841489233s to LocalClient.Create
	I1115 09:06:59.437418  248107 start.go:167] duration metric: took 17.84154843s to libmachine.API.Create "addons-663794"
	I1115 09:06:59.437428  248107 start.go:293] postStartSetup for "addons-663794" (driver="kvm2")
	I1115 09:06:59.437453  248107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:06:59.437539  248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:06:59.440660  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.441163  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.441197  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.441375  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:06:59.526263  248107 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:06:59.531006  248107 info.go:137] Remote host: Buildroot 2025.02
	I1115 09:06:59.531030  248107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/addons for local assets ...
	I1115 09:06:59.531120  248107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/files for local assets ...
	I1115 09:06:59.531166  248107 start.go:296] duration metric: took 93.726582ms for postStartSetup
	I1115 09:06:59.534046  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.534421  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.534462  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.534687  248107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json ...
	I1115 09:06:59.534885  248107 start.go:128] duration metric: took 17.940518976s to createHost
	I1115 09:06:59.536964  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.537419  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.537494  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.537707  248107 main.go:143] libmachine: Using SSH client type: native
	I1115 09:06:59.537933  248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1115 09:06:59.537956  248107 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 09:06:59.645834  248107 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763197619.602217728
	
	I1115 09:06:59.645865  248107 fix.go:216] guest clock: 1763197619.602217728
	I1115 09:06:59.645872  248107 fix.go:229] Guest: 2025-11-15 09:06:59.602217728 +0000 UTC Remote: 2025-11-15 09:06:59.53489904 +0000 UTC m=+18.035095309 (delta=67.318688ms)
	I1115 09:06:59.645888  248107 fix.go:200] guest clock delta is within tolerance: 67.318688ms
	I1115 09:06:59.645893  248107 start.go:83] releasing machines lock for "addons-663794", held for 18.051598507s
	I1115 09:06:59.648983  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.649502  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.649540  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.650144  248107 ssh_runner.go:195] Run: cat /version.json
	I1115 09:06:59.650226  248107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:06:59.653475  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.653674  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.653900  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.653925  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.654064  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:06:59.654067  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:06:59.654093  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:06:59.654290  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:06:59.733122  248107 ssh_runner.go:195] Run: systemctl --version
	I1115 09:06:59.763753  248107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:06:59.922347  248107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:06:59.928971  248107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:06:59.929041  248107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:06:59.949165  248107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:06:59.949214  248107 start.go:496] detecting cgroup driver to use...
	I1115 09:06:59.949294  248107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:06:59.968637  248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:06:59.985036  248107 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:06:59.985124  248107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:07:00.002145  248107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:07:00.018433  248107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:07:00.165090  248107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:07:00.378483  248107 docker.go:234] disabling docker service ...
	I1115 09:07:00.378556  248107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:07:00.396010  248107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:07:00.410656  248107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:07:00.582889  248107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:07:00.729737  248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:07:00.746127  248107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:07:00.768365  248107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:07:00.768440  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.780573  248107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:07:00.780680  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.793379  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.807346  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.820241  248107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:07:00.833951  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.846402  248107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:07:00.866167  248107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
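	Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A minimal bash sketch for checking the result by hand (file path and values taken from the log lines above; the exact formatting of the file is an assumption):
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected, approximately:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]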
	I1115 09:07:00.878800  248107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:07:00.889148  248107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 09:07:00.889223  248107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 09:07:00.908703  248107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
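	The two steps above are a fallback pattern: the bridge-netfilter sysctl only exists once the br_netfilter module is loaded, so the module is probed when the first check fails. A standalone equivalent, with the commands copied from the log:
	  if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	  fi
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"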
	I1115 09:07:00.920326  248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:07:01.052502  248107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:07:01.158647  248107 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:07:01.158749  248107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:07:01.163765  248107 start.go:564] Will wait 60s for crictl version
	I1115 09:07:01.163867  248107 ssh_runner.go:195] Run: which crictl
	I1115 09:07:01.167823  248107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 09:07:01.208507  248107 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 09:07:01.208598  248107 ssh_runner.go:195] Run: crio --version
	I1115 09:07:01.236835  248107 ssh_runner.go:195] Run: crio --version
	I1115 09:07:01.269285  248107 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1115 09:07:01.273203  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:01.273666  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:01.273696  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:01.273885  248107 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1115 09:07:01.278502  248107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:07:01.293649  248107 kubeadm.go:884] updating cluster {Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:07:01.293756  248107 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:07:01.293797  248107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:07:01.329320  248107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 09:07:01.329399  248107 ssh_runner.go:195] Run: which lz4
	I1115 09:07:01.333687  248107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 09:07:01.338288  248107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 09:07:01.338318  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1115 09:07:02.657257  248107 crio.go:462] duration metric: took 1.323614187s to copy over tarball
	I1115 09:07:02.657335  248107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1115 09:07:04.208081  248107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.55072014s)
	I1115 09:07:04.208106  248107 crio.go:469] duration metric: took 1.550816094s to extract the tarball
	I1115 09:07:04.208114  248107 ssh_runner.go:146] rm: /preloaded.tar.lz4
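	The preload handling above is a check, copy, extract, clean sequence. A condensed bash sketch (tarball name, remote path, and extract flags taken from the log; the copy is actually done by minikube's ssh_runner and the SSH user shown earlier is "docker", so the scp line here is only illustrative):
	  if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	    scp preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.78:/preloaded.tar.lz4
	  fi
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm -f /preloaded.tar.lz4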
	I1115 09:07:04.248160  248107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:07:04.296016  248107 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:07:04.296040  248107 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:07:04.296048  248107 kubeadm.go:935] updating node { 192.168.39.78 8443 v1.34.1 crio true true} ...
	I1115 09:07:04.296149  248107 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-663794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
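	The kubelet unit text printed above is applied a few steps later in this log: the scp lines after the kubeadm.yaml is written suggest it lands in the systemd drop-in directory and is activated roughly as follows (paths and commands taken from those later lines; this is a sketch of the sequence, not minikube's exact code path):
	  sudo mkdir -p /etc/systemd/system/kubelet.service.d   # receives 10-kubeadm.conf with the ExecStart override above
	  sudo systemctl daemon-reload
	  sudo systemctl start kubelet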
	I1115 09:07:04.296216  248107 ssh_runner.go:195] Run: crio config
	I1115 09:07:04.340736  248107 cni.go:84] Creating CNI manager for ""
	I1115 09:07:04.340780  248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:07:04.340805  248107 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:07:04.340840  248107 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-663794 NodeName:addons-663794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:07:04.341031  248107 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-663794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:07:04.341115  248107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:07:04.352538  248107 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:07:04.352613  248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:07:04.365470  248107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1115 09:07:04.387028  248107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:07:04.407121  248107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1115 09:07:04.426602  248107 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I1115 09:07:04.430681  248107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:07:04.444673  248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:07:04.582233  248107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:07:04.612886  248107 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794 for IP: 192.168.39.78
	I1115 09:07:04.612907  248107 certs.go:195] generating shared ca certs ...
	I1115 09:07:04.612924  248107 certs.go:227] acquiring lock for ca certs: {Name:mk5e9c8388448c40ecbfe3d7332e5965c3ae4b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:04.613114  248107 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key
	I1115 09:07:04.886492  248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt ...
	I1115 09:07:04.886525  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt: {Name:mk716662fde1df6affa6446a5e91abc5c8085d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:04.886737  248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key ...
	I1115 09:07:04.886751  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key: {Name:mk43adaff6151548c227d0b30489e49a7901a10b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:04.886843  248107 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key
	I1115 09:07:05.192768  248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt ...
	I1115 09:07:05.192807  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt: {Name:mkae4e4311952cda911f41d7a2357cfe0b8cdbf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.192993  248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key ...
	I1115 09:07:05.193007  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key: {Name:mk6a082586b2c55a45c718f609b69033934617eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.193096  248107 certs.go:257] generating profile certs ...
	I1115 09:07:05.193160  248107 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key
	I1115 09:07:05.193185  248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt with IP's: []
	I1115 09:07:05.273214  248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt ...
	I1115 09:07:05.273246  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: {Name:mkff9cbf722a83a5166951c6a00c0dd7ae3051a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.273409  248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key ...
	I1115 09:07:05.273421  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key: {Name:mk1e2f88357869296dc30c00ecf355d769532b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.273503  248107 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b
	I1115 09:07:05.273522  248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
	I1115 09:07:05.333253  248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b ...
	I1115 09:07:05.333284  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b: {Name:mkfe20529a56d056e474711d95ffc98e9dffd8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.333455  248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b ...
	I1115 09:07:05.333468  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b: {Name:mke32129a2275fb1044c3b2819e0014da7333d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.333543  248107 certs.go:382] copying /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b -> /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt
	I1115 09:07:05.333617  248107 certs.go:386] copying /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b -> /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key
	I1115 09:07:05.333664  248107 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key
	I1115 09:07:05.333682  248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt with IP's: []
	I1115 09:07:05.451343  248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt ...
	I1115 09:07:05.451374  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt: {Name:mk7d0bfb9bbe7381b8c5f53d09c41020c3e45f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.451556  248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key ...
	I1115 09:07:05.451572  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key: {Name:mk2f123e88e944188fc34e55170d3285e7b9191b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:05.451747  248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:07:05.451782  248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:07:05.451805  248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:07:05.451834  248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem (1675 bytes)
	I1115 09:07:05.452372  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:07:05.497566  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 09:07:05.532544  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:07:05.561477  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1115 09:07:05.589159  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:07:05.617842  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:07:05.647537  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:07:05.676382  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:07:05.704578  248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:07:05.733288  248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:07:05.753028  248107 ssh_runner.go:195] Run: openssl version
	I1115 09:07:05.759710  248107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:07:05.772349  248107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:07:05.777309  248107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:07 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:07:05.777376  248107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:07:05.784754  248107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
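	The last two commands explain the otherwise opaque "b5213941.0" name: OpenSSL looks up CA files in /etc/ssl/certs by the certificate's subject hash, so the symlink is named after the hash of minikubeCA.pem. Reproduced by hand with the same paths as the log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # here HASH is b5213941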
	I1115 09:07:05.797631  248107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:07:05.802359  248107 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:07:05.802421  248107 kubeadm.go:401] StartCluster: {Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:07:05.802552  248107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:07:05.802616  248107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:07:05.842438  248107 cri.go:89] found id: ""
	I1115 09:07:05.842536  248107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:07:05.854429  248107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:07:05.866197  248107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:07:05.877700  248107 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:07:05.877725  248107 kubeadm.go:158] found existing configuration files:
	
	I1115 09:07:05.877774  248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:07:05.888409  248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:07:05.888487  248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:07:05.899499  248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:07:05.909878  248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:07:05.909943  248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:07:05.921999  248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:07:05.932397  248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:07:05.932489  248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:07:05.944349  248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:07:05.955674  248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:07:05.955757  248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
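	The four grep/rm pairs above apply the same stale-config check to each kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it. Written as a loop, with the endpoint and paths taken from the log:
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done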
	I1115 09:07:05.967200  248107 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1115 09:07:06.137144  248107 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:07:18.006164  248107 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:07:18.006241  248107 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:07:18.006349  248107 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:07:18.006535  248107 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:07:18.006683  248107 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:07:18.006782  248107 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:07:18.008655  248107 out.go:252]   - Generating certificates and keys ...
	I1115 09:07:18.008749  248107 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:07:18.008835  248107 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:07:18.008945  248107 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:07:18.009030  248107 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:07:18.009122  248107 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:07:18.009194  248107 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:07:18.009276  248107 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:07:18.009440  248107 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-663794 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1115 09:07:18.009534  248107 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:07:18.009691  248107 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-663794 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1115 09:07:18.009785  248107 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:07:18.009891  248107 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:07:18.009962  248107 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:07:18.010040  248107 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:07:18.010117  248107 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:07:18.010199  248107 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:07:18.010278  248107 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:07:18.010380  248107 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:07:18.010436  248107 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:07:18.010519  248107 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:07:18.010573  248107 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:07:18.011819  248107 out.go:252]   - Booting up control plane ...
	I1115 09:07:18.011940  248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:07:18.012053  248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:07:18.012146  248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:07:18.012275  248107 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:07:18.012362  248107 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:07:18.012465  248107 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:07:18.012549  248107 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:07:18.012584  248107 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:07:18.012694  248107 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:07:18.012786  248107 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:07:18.012867  248107 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002791632s
	I1115 09:07:18.013017  248107 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:07:18.013138  248107 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.78:8443/livez
	I1115 09:07:18.013263  248107 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:07:18.013370  248107 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:07:18.013493  248107 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.780182478s
	I1115 09:07:18.013556  248107 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.324840451s
	I1115 09:07:18.013619  248107 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.503632372s
	I1115 09:07:18.013711  248107 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:07:18.013834  248107 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:07:18.013919  248107 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:07:18.014086  248107 kubeadm.go:319] [mark-control-plane] Marking the node addons-663794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:07:18.014137  248107 kubeadm.go:319] [bootstrap-token] Using token: bi6n1i.svktgwn7kozvn22r
	I1115 09:07:18.015425  248107 out.go:252]   - Configuring RBAC rules ...
	I1115 09:07:18.015529  248107 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:07:18.015638  248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:07:18.015779  248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:07:18.015906  248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:07:18.016006  248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:07:18.016096  248107 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:07:18.016264  248107 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:07:18.016310  248107 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:07:18.016348  248107 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:07:18.016354  248107 kubeadm.go:319] 
	I1115 09:07:18.016405  248107 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:07:18.016411  248107 kubeadm.go:319] 
	I1115 09:07:18.016495  248107 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:07:18.016513  248107 kubeadm.go:319] 
	I1115 09:07:18.016560  248107 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:07:18.016643  248107 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:07:18.016718  248107 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:07:18.016727  248107 kubeadm.go:319] 
	I1115 09:07:18.016798  248107 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:07:18.016812  248107 kubeadm.go:319] 
	I1115 09:07:18.016887  248107 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:07:18.016900  248107 kubeadm.go:319] 
	I1115 09:07:18.016958  248107 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:07:18.017019  248107 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:07:18.017074  248107 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:07:18.017079  248107 kubeadm.go:319] 
	I1115 09:07:18.017157  248107 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:07:18.017220  248107 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:07:18.017226  248107 kubeadm.go:319] 
	I1115 09:07:18.017319  248107 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bi6n1i.svktgwn7kozvn22r \
	I1115 09:07:18.017465  248107 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:955850964525d9732287aff1ea5d847a03627ee2de071247980c680415246b6c \
	I1115 09:07:18.017499  248107 kubeadm.go:319] 	--control-plane 
	I1115 09:07:18.017508  248107 kubeadm.go:319] 
	I1115 09:07:18.017596  248107 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:07:18.017606  248107 kubeadm.go:319] 
	I1115 09:07:18.017670  248107 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bi6n1i.svktgwn7kozvn22r \
	I1115 09:07:18.017779  248107 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:955850964525d9732287aff1ea5d847a03627ee2de071247980c680415246b6c 
	I1115 09:07:18.017790  248107 cni.go:84] Creating CNI manager for ""
	I1115 09:07:18.017797  248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:07:18.019302  248107 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 09:07:18.020439  248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 09:07:18.036245  248107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 09:07:18.058501  248107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:07:18.058589  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:18.058627  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-663794 minikube.k8s.io/updated_at=2025_11_15T09_07_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-663794 minikube.k8s.io/primary=true
	I1115 09:07:18.102481  248107 ops.go:34] apiserver oom_adj: -16
	I1115 09:07:18.228632  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:18.729029  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:19.228837  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:19.728895  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:20.228760  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:20.728807  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:21.229085  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:21.729488  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:22.229755  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:22.728749  248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:07:22.848117  248107 kubeadm.go:1114] duration metric: took 4.789603502s to wait for elevateKubeSystemPrivileges
	I1115 09:07:22.848165  248107 kubeadm.go:403] duration metric: took 17.045749764s to StartCluster
	I1115 09:07:22.848191  248107 settings.go:142] acquiring lock: {Name:mk00f9aa5a46ce077bf17ee5efb58b1b4c2cdbac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:22.848351  248107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:07:22.849075  248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/kubeconfig: {Name:mk85b3ca0ac5a906394239d54dc0b40d127f71ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:07:22.849361  248107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:07:22.849400  248107 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:07:22.849470  248107 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:07:22.849617  248107 addons.go:70] Setting yakd=true in profile "addons-663794"
	I1115 09:07:22.849640  248107 addons.go:239] Setting addon yakd=true in "addons-663794"
	I1115 09:07:22.849634  248107 addons.go:70] Setting cloud-spanner=true in profile "addons-663794"
	I1115 09:07:22.849664  248107 addons.go:239] Setting addon cloud-spanner=true in "addons-663794"
	I1115 09:07:22.849664  248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:07:22.849680  248107 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-663794"
	I1115 09:07:22.849693  248107 addons.go:70] Setting default-storageclass=true in profile "addons-663794"
	I1115 09:07:22.849699  248107 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-663794"
	I1115 09:07:22.849699  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.849703  248107 addons.go:70] Setting registry=true in profile "addons-663794"
	I1115 09:07:22.849711  248107 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-663794"
	I1115 09:07:22.849718  248107 addons.go:239] Setting addon registry=true in "addons-663794"
	I1115 09:07:22.849671  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.849722  248107 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-663794"
	I1115 09:07:22.849743  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.849744  248107 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-663794"
	I1115 09:07:22.849773  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.850615  248107 addons.go:70] Setting volcano=true in profile "addons-663794"
	I1115 09:07:22.850631  248107 addons.go:70] Setting registry-creds=true in profile "addons-663794"
	I1115 09:07:22.850638  248107 addons.go:239] Setting addon volcano=true in "addons-663794"
	I1115 09:07:22.850650  248107 addons.go:239] Setting addon registry-creds=true in "addons-663794"
	I1115 09:07:22.850671  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.850683  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.849675  248107 addons.go:70] Setting storage-provisioner=true in profile "addons-663794"
	I1115 09:07:22.850800  248107 addons.go:239] Setting addon storage-provisioner=true in "addons-663794"
	I1115 09:07:22.850875  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.850975  248107 addons.go:70] Setting ingress=true in profile "addons-663794"
	I1115 09:07:22.850994  248107 addons.go:239] Setting addon ingress=true in "addons-663794"
	I1115 09:07:22.851043  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.849654  248107 addons.go:70] Setting metrics-server=true in profile "addons-663794"
	I1115 09:07:22.851076  248107 addons.go:239] Setting addon metrics-server=true in "addons-663794"
	I1115 09:07:22.851103  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.851281  248107 addons.go:70] Setting gcp-auth=true in profile "addons-663794"
	I1115 09:07:22.851306  248107 mustload.go:66] Loading cluster: addons-663794
	I1115 09:07:22.851506  248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:07:22.849684  248107 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-663794"
	I1115 09:07:22.849695  248107 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-663794"
	I1115 09:07:22.851664  248107 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-663794"
	I1115 09:07:22.851699  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.851741  248107 addons.go:70] Setting volumesnapshots=true in profile "addons-663794"
	I1115 09:07:22.851764  248107 addons.go:70] Setting ingress-dns=true in profile "addons-663794"
	I1115 09:07:22.851772  248107 addons.go:239] Setting addon volumesnapshots=true in "addons-663794"
	I1115 09:07:22.851776  248107 addons.go:239] Setting addon ingress-dns=true in "addons-663794"
	I1115 09:07:22.851801  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.851817  248107 addons.go:70] Setting inspektor-gadget=true in profile "addons-663794"
	I1115 09:07:22.851828  248107 addons.go:239] Setting addon inspektor-gadget=true in "addons-663794"
	I1115 09:07:22.851855  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.851641  248107 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-663794"
	I1115 09:07:22.852069  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.851803  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.852975  248107 out.go:179] * Verifying Kubernetes components...
	I1115 09:07:22.854256  248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:07:22.858948  248107 addons.go:239] Setting addon default-storageclass=true in "addons-663794"
	I1115 09:07:22.858959  248107 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-663794"
	I1115 09:07:22.858989  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.858998  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.859333  248107 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:07:22.859344  248107 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:07:22.859417  248107 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:07:22.859357  248107 out.go:179]   - Using image docker.io/registry:3.0.0
	W1115 09:07:22.859938  248107 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:07:22.860621  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:22.860813  248107 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:07:22.860833  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:07:22.861540  248107 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:07:22.861548  248107 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:07:22.861620  248107 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:07:22.861637  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:07:22.861541  248107 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:07:22.862336  248107 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:07:22.862759  248107 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:07:22.862787  248107 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:07:22.862784  248107 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:07:22.862884  248107 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:07:22.862972  248107 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:07:22.863370  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:07:22.863687  248107 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:07:22.864192  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:07:22.864571  248107 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:07:22.864595  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:07:22.864606  248107 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:07:22.864619  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:07:22.864643  248107 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:07:22.864571  248107 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:07:22.865061  248107 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:07:22.865071  248107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:07:22.865071  248107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:07:22.864675  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:07:22.864707  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:07:22.865549  248107 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:07:22.866364  248107 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:07:22.866381  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:07:22.866380  248107 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:07:22.867285  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:07:22.867301  248107 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:07:22.868118  248107 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:07:22.868159  248107 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:07:22.868543  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:07:22.868907  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:07:22.869639  248107 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:07:22.870365  248107 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:07:22.870674  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.871205  248107 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:07:22.871221  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:07:22.871401  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.871935  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:07:22.872146  248107 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:07:22.872167  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:07:22.872629  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.872687  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.872821  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.872873  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.872957  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.873520  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.873596  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.873945  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.874574  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.874612  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.875070  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.875468  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.875538  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.875571  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.876154  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.876662  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.876829  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.876931  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.876966  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.877521  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:07:22.877641  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.877650  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.877675  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.877681  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.877791  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.878517  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.878533  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.878733  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.879234  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.879263  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.879300  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.879502  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.879664  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.879955  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.879987  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.880031  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.880201  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:07:22.880387  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.880577  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.880613  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.880789  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.880825  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.880858  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.880947  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.881212  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.881680  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.881703  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.881876  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.882064  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.882070  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.882568  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.882629  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.882664  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.882700  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.882634  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:07:22.882900  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.882906  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:22.884982  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:07:22.885982  248107 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:07:22.886790  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:07:22.886806  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:07:22.889623  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.890087  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:22.890112  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:22.890278  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	W1115 09:07:23.298991  248107 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44588->192.168.39.78:22: read: connection reset by peer
	I1115 09:07:23.299029  248107 retry.go:31] will retry after 366.623498ms: ssh: handshake failed: read tcp 192.168.39.1:44588->192.168.39.78:22: read: connection reset by peer
	I1115 09:07:23.852976  248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:07:23.853003  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:07:23.935387  248107 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:07:23.935414  248107 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:07:23.946234  248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:07:23.946257  248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:07:23.956926  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:07:23.957619  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:07:23.962879  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:07:23.963937  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:07:24.069313  248107 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:07:24.069341  248107 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:07:24.076533  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:07:24.108238  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:07:24.139246  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:07:24.199045  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:07:24.208090  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:07:24.307325  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:07:24.307356  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:07:24.342774  248107 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.493374848s)
	I1115 09:07:24.342832  248107 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.488547473s)
	I1115 09:07:24.342925  248107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:07:24.342974  248107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
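	(Editor's aside: the pipeline just above edits the CoreDNS ConfigMap in place so that host.minikube.internal resolves to the host-side address 192.168.39.1. The sed inserts a hosts block ahead of the forward plugin in the Corefile; the fragment below sketches the result, and the kubectl command is an assumed way to inspect it afterwards rather than something the test runs.)

	# Approximate Corefile fragment added by the sed pipeline above:
	#     hosts {
	#        192.168.39.1 host.minikube.internal
	#        fallthrough
	#     }
	# Inspect the live ConfigMap (illustrative):
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'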
	I1115 09:07:24.361260  248107 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:07:24.361288  248107 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:07:24.376832  248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:07:24.376856  248107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:07:24.441920  248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:07:24.441943  248107 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:07:24.441954  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:07:24.441954  248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:07:24.659261  248107 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:07:24.659302  248107 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:07:24.714316  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:07:24.754813  248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:07:24.754856  248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:07:24.823376  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:07:24.823406  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:07:24.847828  248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:07:24.847863  248107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:07:24.883883  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:07:24.890982  248107 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:07:24.891016  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:07:25.025347  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:07:25.025382  248107 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:07:25.154340  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:07:25.154372  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:07:25.190218  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:07:25.201352  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:07:25.427831  248107 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:07:25.427863  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:07:25.566631  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:07:25.566663  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:07:25.814586  248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:07:25.814613  248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:07:25.856919  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:07:25.896013  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.939047246s)
	I1115 09:07:25.896075  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.93842362s)
	I1115 09:07:25.975633  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.012718036s)
	I1115 09:07:26.146929  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:07:26.146967  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:07:26.672229  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:07:26.672260  248107 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:07:27.064960  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:07:27.064984  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:07:27.382699  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:07:27.382723  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:07:27.811481  248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:07:27.811507  248107 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:07:28.073076  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:07:29.133580  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.169606655s)
	I1115 09:07:30.307005  248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:07:30.310176  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:30.310716  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:30.310744  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:30.310923  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:30.667161  248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:07:30.849901  248107 addons.go:239] Setting addon gcp-auth=true in "addons-663794"
	I1115 09:07:30.849955  248107 host.go:66] Checking if "addons-663794" exists ...
	I1115 09:07:30.851964  248107 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:07:30.854731  248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:30.855216  248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
	I1115 09:07:30.855241  248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
	I1115 09:07:30.855468  248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
	I1115 09:07:31.702326  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.625745813s)
	I1115 09:07:31.702373  248107 addons.go:480] Verifying addon ingress=true in "addons-663794"
	I1115 09:07:31.702372  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.594089756s)
	I1115 09:07:31.702536  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.563261864s)
	I1115 09:07:31.702588  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.503516365s)
	I1115 09:07:31.702649  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.494531977s)
	I1115 09:07:31.702686  248107 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.359741673s)
	I1115 09:07:31.702711  248107 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.359715763s)
	I1115 09:07:31.702728  248107 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1115 09:07:31.702791  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.988410815s)
	I1115 09:07:31.702833  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.818914491s)
	I1115 09:07:31.702929  248107 addons.go:480] Verifying addon registry=true in "addons-663794"
	I1115 09:07:31.702952  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.512699412s)
	I1115 09:07:31.702976  248107 addons.go:480] Verifying addon metrics-server=true in "addons-663794"
	I1115 09:07:31.702996  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.501600728s)
	I1115 09:07:31.703666  248107 node_ready.go:35] waiting up to 6m0s for node "addons-663794" to be "Ready" ...
	I1115 09:07:31.705154  248107 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-663794 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:07:31.705167  248107 out.go:179] * Verifying registry addon...
	I1115 09:07:31.705161  248107 out.go:179] * Verifying ingress addon...
	I1115 09:07:31.707129  248107 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:07:31.707425  248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:07:31.742288  248107 node_ready.go:49] node "addons-663794" is "Ready"
	I1115 09:07:31.742323  248107 node_ready.go:38] duration metric: took 38.616725ms for node "addons-663794" to be "Ready" ...
	I1115 09:07:31.742339  248107 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:07:31.742392  248107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:07:31.778333  248107 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:07:31.778364  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:31.779070  248107 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:07:31.779090  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:07:31.793365  248107 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1115 09:07:32.029695  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.172729567s)
	W1115 09:07:32.029750  248107 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:07:32.029797  248107 retry.go:31] will retry after 132.54435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
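	(Editor's aside: the failure above is an ordering problem, not a content problem. The VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for that kind does not exist yet — hence "ensure CRDs are installed first". The retry below is expected to go through once the CRDs have been established. A hedged sketch of how the same race could be avoided explicitly, assuming plain kubectl against the same manifest paths:)

	# Illustrative only -- not what minikube does. Apply the CRDs first, wait for
	# them to become Established, then apply the objects that depend on them.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml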
	I1115 09:07:32.162587  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:07:32.211945  248107 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-663794" context rescaled to 1 replicas
	I1115 09:07:32.218410  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:32.218627  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:32.801830  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:32.802587  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:33.086717  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.013588129s)
	I1115 09:07:33.086767  248107 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-663794"
	I1115 09:07:33.086768  248107 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.234777146s)
	I1115 09:07:33.086868  248107 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.344455453s)
	I1115 09:07:33.086911  248107 api_server.go:72] duration metric: took 10.237478946s to wait for apiserver process to appear ...
	I1115 09:07:33.086922  248107 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:07:33.086945  248107 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I1115 09:07:33.089135  248107 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:07:33.089172  248107 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:07:33.090297  248107 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:07:33.091208  248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:07:33.091296  248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:07:33.091312  248107 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:07:33.138023  248107 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I1115 09:07:33.139155  248107 api_server.go:141] control plane version: v1.34.1
	I1115 09:07:33.139186  248107 api_server.go:131] duration metric: took 52.255311ms to wait for apiserver health ...
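	(Editor's aside: the lines above poll https://192.168.39.78:8443/healthz until it answers 200. An equivalent manual probe is sketched below; it is an assumption that this works unauthenticated, since the response depends on how anonymous auth is configured on the apiserver.)

	# Illustrative manual check; -k skips TLS verification against the cluster CA.
	curl -k https://192.168.39.78:8443/healthz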
	I1115 09:07:33.139199  248107 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:07:33.147800  248107 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:07:33.147828  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:33.158009  248107 system_pods.go:59] 20 kube-system pods found
	I1115 09:07:33.158051  248107 system_pods.go:61] "amd-gpu-device-plugin-wqpn5" [d0adea6d-3b3e-41d2-8340-2d42b53060e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:07:33.158060  248107 system_pods.go:61] "coredns-66bc5c9577-8jkds" [dd0f8515-daad-4d10-9aba-fcd0e8b6e400] Running
	I1115 09:07:33.158069  248107 system_pods.go:61] "coredns-66bc5c9577-cm284" [23ab3d77-85ec-40f3-afff-0a20ae3716f2] Running
	I1115 09:07:33.158079  248107 system_pods.go:61] "csi-hostpath-attacher-0" [f3ac8c44-e97a-415a-9da5-2861ac50ed3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:07:33.158089  248107 system_pods.go:61] "csi-hostpath-resizer-0" [7d461536-20d4-4e76-ad1a-a96a3fad5a61] Pending
	I1115 09:07:33.158098  248107 system_pods.go:61] "csi-hostpathplugin-zsbwn" [6717d9e7-923f-476e-97d5-2384885e4838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:07:33.158105  248107 system_pods.go:61] "etcd-addons-663794" [6900238e-53a4-4ca1-a620-18fcc9a25270] Running
	I1115 09:07:33.158112  248107 system_pods.go:61] "kube-apiserver-addons-663794" [3091663f-e6e7-4f57-88c7-6992940c38c9] Running
	I1115 09:07:33.158122  248107 system_pods.go:61] "kube-controller-manager-addons-663794" [6dbf7d10-e0f0-4d7e-a183-1055122ae05d] Running
	I1115 09:07:33.158131  248107 system_pods.go:61] "kube-ingress-dns-minikube" [25109e09-af9d-420d-b989-529552614336] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:07:33.158138  248107 system_pods.go:61] "kube-proxy-kjfgf" [3eeef006-089f-401e-956f-df7c8c9d9a44] Running
	I1115 09:07:33.158145  248107 system_pods.go:61] "kube-scheduler-addons-663794" [0d5017fe-6032-4daf-a785-cf42e429886f] Running
	I1115 09:07:33.158155  248107 system_pods.go:61] "metrics-server-85b7d694d7-z4cnh" [a5aaf6d1-1d0d-439f-a5f1-50cd9a24a185] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:07:33.158168  248107 system_pods.go:61] "nvidia-device-plugin-daemonset-tz8vm" [7fa140f3-685f-4d2a-8467-05ffa2701601] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:07:33.158177  248107 system_pods.go:61] "registry-6b586f9694-hgvh6" [76662db3-ff4c-4ca1-8587-5d8f12c77a66] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:07:33.158186  248107 system_pods.go:61] "registry-creds-764b6fb674-ckjls" [05e09078-15a0-4a10-bbcf-6ef46b064286] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:07:33.158195  248107 system_pods.go:61] "registry-proxy-9tkz8" [527b58a0-a1f0-4419-ac42-b4de22cf8ccb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:07:33.158202  248107 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6cbw4" [33beb311-2ed8-4dbd-a0e0-297d6eccc21a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:07:33.158212  248107 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f6qhx" [44385801-5893-4318-b18c-25f5dbedef16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:07:33.158222  248107 system_pods.go:61] "storage-provisioner" [5890e29d-b25e-40cb-ae66-27c0be7f0c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:07:33.158231  248107 system_pods.go:74] duration metric: took 19.024157ms to wait for pod list to return data ...
	I1115 09:07:33.158244  248107 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:07:33.185286  248107 default_sa.go:45] found service account: "default"
	I1115 09:07:33.185315  248107 default_sa.go:55] duration metric: took 27.063806ms for default service account to be created ...
	I1115 09:07:33.185327  248107 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:07:33.202151  248107 system_pods.go:86] 20 kube-system pods found
	I1115 09:07:33.202191  248107 system_pods.go:89] "amd-gpu-device-plugin-wqpn5" [d0adea6d-3b3e-41d2-8340-2d42b53060e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:07:33.202200  248107 system_pods.go:89] "coredns-66bc5c9577-8jkds" [dd0f8515-daad-4d10-9aba-fcd0e8b6e400] Running
	I1115 09:07:33.202208  248107 system_pods.go:89] "coredns-66bc5c9577-cm284" [23ab3d77-85ec-40f3-afff-0a20ae3716f2] Running
	I1115 09:07:33.202250  248107 system_pods.go:89] "csi-hostpath-attacher-0" [f3ac8c44-e97a-415a-9da5-2861ac50ed3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:07:33.202261  248107 system_pods.go:89] "csi-hostpath-resizer-0" [7d461536-20d4-4e76-ad1a-a96a3fad5a61] Pending
	I1115 09:07:33.202272  248107 system_pods.go:89] "csi-hostpathplugin-zsbwn" [6717d9e7-923f-476e-97d5-2384885e4838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:07:33.202281  248107 system_pods.go:89] "etcd-addons-663794" [6900238e-53a4-4ca1-a620-18fcc9a25270] Running
	I1115 09:07:33.202289  248107 system_pods.go:89] "kube-apiserver-addons-663794" [3091663f-e6e7-4f57-88c7-6992940c38c9] Running
	I1115 09:07:33.202295  248107 system_pods.go:89] "kube-controller-manager-addons-663794" [6dbf7d10-e0f0-4d7e-a183-1055122ae05d] Running
	I1115 09:07:33.202309  248107 system_pods.go:89] "kube-ingress-dns-minikube" [25109e09-af9d-420d-b989-529552614336] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:07:33.202315  248107 system_pods.go:89] "kube-proxy-kjfgf" [3eeef006-089f-401e-956f-df7c8c9d9a44] Running
	I1115 09:07:33.202322  248107 system_pods.go:89] "kube-scheduler-addons-663794" [0d5017fe-6032-4daf-a785-cf42e429886f] Running
	I1115 09:07:33.202333  248107 system_pods.go:89] "metrics-server-85b7d694d7-z4cnh" [a5aaf6d1-1d0d-439f-a5f1-50cd9a24a185] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:07:33.202345  248107 system_pods.go:89] "nvidia-device-plugin-daemonset-tz8vm" [7fa140f3-685f-4d2a-8467-05ffa2701601] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:07:33.202358  248107 system_pods.go:89] "registry-6b586f9694-hgvh6" [76662db3-ff4c-4ca1-8587-5d8f12c77a66] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:07:33.202367  248107 system_pods.go:89] "registry-creds-764b6fb674-ckjls" [05e09078-15a0-4a10-bbcf-6ef46b064286] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:07:33.202378  248107 system_pods.go:89] "registry-proxy-9tkz8" [527b58a0-a1f0-4419-ac42-b4de22cf8ccb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:07:33.202387  248107 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6cbw4" [33beb311-2ed8-4dbd-a0e0-297d6eccc21a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:07:33.202396  248107 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f6qhx" [44385801-5893-4318-b18c-25f5dbedef16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:07:33.202405  248107 system_pods.go:89] "storage-provisioner" [5890e29d-b25e-40cb-ae66-27c0be7f0c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:07:33.202421  248107 system_pods.go:126] duration metric: took 17.082448ms to wait for k8s-apps to be running ...
	I1115 09:07:33.202437  248107 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:07:33.202507  248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:07:33.220667  248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:07:33.220693  248107 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:07:33.226296  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:33.295766  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:33.309919  248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:07:33.309944  248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:07:33.359472  248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:07:33.608180  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:33.713189  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:33.717149  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:34.100515  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:34.254687  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:34.256283  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:34.602187  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:34.720040  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:34.721271  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:35.098173  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:35.213267  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:35.214996  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:35.597839  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:35.611666  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.449005846s)
	I1115 09:07:35.611712  248107 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.409176256s)
	I1115 09:07:35.611738  248107 system_svc.go:56] duration metric: took 2.40929762s WaitForService to wait for kubelet
	I1115 09:07:35.611749  248107 kubeadm.go:587] duration metric: took 12.762317792s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:07:35.611777  248107 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:07:35.611801  248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.252297884s)
	I1115 09:07:35.613086  248107 addons.go:480] Verifying addon gcp-auth=true in "addons-663794"
	I1115 09:07:35.614808  248107 out.go:179] * Verifying gcp-auth addon...
	I1115 09:07:35.616083  248107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 09:07:35.616106  248107 node_conditions.go:123] node cpu capacity is 2
	I1115 09:07:35.616121  248107 node_conditions.go:105] duration metric: took 4.337898ms to run NodePressure ...
	I1115 09:07:35.616135  248107 start.go:242] waiting for startup goroutines ...
	I1115 09:07:35.616857  248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:07:35.622909  248107 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:07:35.622935  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:35.716491  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:35.717682  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:36.096116  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:36.120094  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:36.211403  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:36.211933  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:36.595656  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:36.621021  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:36.711258  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:36.712572  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:37.097058  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:37.122934  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:37.221840  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:37.222106  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:37.597514  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:37.622993  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:37.724467  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:37.728757  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:38.098762  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:38.124957  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:38.213534  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:38.217091  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:38.596983  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:38.621993  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:38.712212  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:38.713704  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:39.096546  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:39.122295  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:39.215826  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:39.216132  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:39.594865  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:39.620922  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:39.712653  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:39.713631  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:40.097017  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:40.122394  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:40.210854  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:40.210899  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:40.596160  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:40.621226  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:40.711567  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:40.712578  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:41.096119  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:41.120992  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:41.212142  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:41.212163  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:41.595487  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:41.620728  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:41.711965  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:41.712158  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:42.095291  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:42.120584  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:42.211176  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:42.211282  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:42.595938  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:42.621049  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:42.711234  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:42.712875  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:43.094391  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:43.120358  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:43.210538  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:43.213000  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:43.595985  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:43.621877  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:43.715594  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:43.715678  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:44.098589  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:44.121896  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:44.211492  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:44.213357  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:44.595434  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:44.621806  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:44.711245  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:44.714288  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:45.097500  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:45.120874  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:45.213062  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:45.214423  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:45.683834  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:45.686540  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:45.787011  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:45.787674  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:46.097100  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:46.121567  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:46.210973  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:46.211377  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:46.597237  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:46.620934  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:46.713168  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:46.713396  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:47.096369  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:47.121346  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:47.211501  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:47.211537  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:47.595670  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:47.620987  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:47.713311  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:47.713719  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:48.095582  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:48.120705  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:48.212080  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:48.212195  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:48.594985  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:48.619634  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:48.713722  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:48.713890  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:49.095817  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:49.123336  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:49.211252  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:49.212341  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:49.597899  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:49.621064  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:49.712658  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:49.716676  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:50.096808  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:50.122166  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:50.213342  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:50.213343  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:50.597233  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:50.619969  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:50.715401  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:50.718000  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:51.096522  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:51.120541  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:51.216763  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:51.217353  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:51.595744  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:51.620869  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:51.713848  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:51.714820  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:52.254179  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:52.254362  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:52.254363  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:52.254465  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:52.597780  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:52.621647  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:52.711011  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:52.711936  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:53.095746  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:53.120615  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:53.214336  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:53.215844  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:53.595983  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:53.621577  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:53.710595  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:53.711146  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:54.096519  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:54.120902  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:54.213059  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:54.214228  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:54.598770  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:54.622464  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:54.713325  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:54.717081  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:55.095511  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:55.121508  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:55.211478  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:55.213957  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:55.596231  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:55.621677  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:55.712664  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:55.714843  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:56.098817  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:56.120815  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:56.212092  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:56.212111  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:56.597473  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:56.621159  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:56.712023  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:56.713774  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:57.095667  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:57.121149  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:57.210927  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:57.212046  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:57.594646  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:57.620186  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:57.710770  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:57.710944  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:58.095213  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:58.120925  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:58.213230  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:58.214117  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:58.598021  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:58.622823  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:58.714407  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:58.714996  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:59.096062  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:59.121660  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:59.212263  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:07:59.213059  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:59.597112  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:07:59.623914  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:07:59.713033  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:07:59.714098  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:00.095401  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:00.120345  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:00.217052  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:00.222199  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:00.598089  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:00.622756  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:00.711947  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:00.712058  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:01.095693  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:01.122090  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:01.460510  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:01.461870  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:01.598364  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:01.622413  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:01.711895  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:01.712016  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:02.095338  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:02.120350  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:02.213836  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:02.214144  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:02.595588  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:02.620218  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:02.711125  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:02.712059  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:03.095567  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:03.120492  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:03.212387  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:03.217019  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:03.595180  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:03.620144  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:03.711665  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:03.714651  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:04.095680  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:04.121930  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:04.211657  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:04.212981  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:04.598786  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:04.621084  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:04.715845  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:04.719125  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:05.097048  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:05.121741  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:05.213039  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:05.213237  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:05.594709  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:05.620376  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:05.711989  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:05.713122  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:06.096134  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:06.121316  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:06.213099  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:06.220961  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:06.595754  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:06.621013  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:06.711584  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:06.712144  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:07.095204  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:07.119886  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:07.211588  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:08:07.212769  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:07.598224  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:07.621186  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:07.711095  248107 kapi.go:107] duration metric: took 36.003677244s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:08:07.711741  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:08.095222  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:08.120582  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:08.211899  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:08.596155  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:08.620458  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:08.710876  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:09.097030  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:09.122479  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:09.211889  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:09.596649  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:09.620327  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:09.711414  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:10.094997  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:10.120753  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:10.211867  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:10.599009  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:10.621628  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:10.711767  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:11.096346  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:11.124858  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:11.211193  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:11.595681  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:11.620333  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:11.711963  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:12.099398  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:12.122003  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:12.213825  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:12.597037  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:12.621146  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:12.712034  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:13.094945  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:13.124482  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:13.214370  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:13.595293  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:13.622365  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:13.713036  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:14.099343  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:14.122417  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:14.216611  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:14.832728  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:14.832884  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:14.833013  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:15.097973  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:15.120257  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:15.210968  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:15.606661  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:15.624260  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:15.712940  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:16.104848  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:16.124037  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:16.213041  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:16.595885  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:16.620876  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:16.711520  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:17.095654  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:17.121042  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:17.211248  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:17.594897  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:17.620841  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:17.711246  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:18.094850  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:18.122511  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:18.211267  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:18.596407  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:18.619894  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:18.711279  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:19.095940  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:19.124812  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:19.212774  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:19.595295  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:19.620382  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:19.710734  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:20.095578  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:20.120624  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:20.211067  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:20.599553  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:20.700544  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:20.711118  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:21.098651  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:21.124164  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:21.213513  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:21.595834  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:21.621694  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:21.710994  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:22.095590  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:22.121979  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:22.212931  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:22.602197  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:22.620121  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:22.716733  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:23.099660  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:23.121953  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:23.213365  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:23.596526  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:23.622522  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:23.714682  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:24.095381  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:24.120057  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:24.211553  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:24.595532  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:24.620787  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:24.712468  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:25.096457  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:25.122485  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:25.211917  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:25.596864  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:25.623016  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:25.712778  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:26.096065  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:26.120681  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:26.212665  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:26.596023  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:26.621030  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:26.713038  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:27.097069  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:27.121661  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:27.212593  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:27.595255  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:27.620031  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:27.711335  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:28.104793  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:28.123971  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:28.212791  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:28.595405  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:28.620579  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:28.712515  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:29.097683  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:29.197420  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:29.213871  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:29.595867  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:29.623778  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:29.711262  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:30.102495  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:30.124019  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:30.211995  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:30.598077  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:30.619671  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:30.711024  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:31.096049  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:31.123811  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:31.212144  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:31.596926  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:31.622938  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:31.713166  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:32.100712  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:32.121367  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:32.217342  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:32.595319  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:32.621094  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:32.713751  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:33.102838  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:33.124879  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:33.215738  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:33.600153  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:33.620073  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:33.712122  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:34.096703  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:34.121904  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:34.214120  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:34.594881  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:34.621650  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:34.711047  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:35.095087  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:35.120601  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:35.211164  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:35.597908  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:35.621382  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:35.710949  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:36.097236  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:36.122108  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:36.213324  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:36.596307  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:36.621065  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:36.711348  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:37.097111  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:37.121519  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:37.215055  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:37.600495  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:08:37.623157  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:37.711651  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:38.104501  248107 kapi.go:107] duration metric: took 1m5.01328995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:08:38.127575  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:38.214834  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:38.624843  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:38.712568  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:39.125145  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:39.215821  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:39.622958  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:39.711746  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:40.120800  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:40.211225  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:40.620499  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:40.711199  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:41.121108  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:41.211302  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:41.625433  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:41.712212  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:42.120862  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:42.213108  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:42.620681  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:42.711799  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:43.120992  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:43.221876  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:43.621934  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:43.723145  248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:08:44.123036  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:44.212187  248107 kapi.go:107] duration metric: took 1m12.505049599s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:08:44.621283  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:45.120804  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:45.621260  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:46.121757  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:46.621844  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:47.122314  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:47.620380  248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:08:48.121786  248107 kapi.go:107] duration metric: took 1m12.504923687s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:08:48.123321  248107 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-663794 cluster.
	I1115 09:08:48.124701  248107 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:08:48.125904  248107 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 09:08:48.127125  248107 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1115 09:08:48.128299  248107 addons.go:515] duration metric: took 1m25.27885159s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner inspektor-gadget ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1115 09:08:48.128348  248107 start.go:247] waiting for cluster config update ...
	I1115 09:08:48.128380  248107 start.go:256] writing updated cluster config ...
	I1115 09:08:48.128717  248107 ssh_runner.go:195] Run: rm -f paused
	I1115 09:08:48.145880  248107 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:08:48.221356  248107 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cm284" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.226968  248107 pod_ready.go:94] pod "coredns-66bc5c9577-cm284" is "Ready"
	I1115 09:08:48.226993  248107 pod_ready.go:86] duration metric: took 5.606037ms for pod "coredns-66bc5c9577-cm284" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.229298  248107 pod_ready.go:83] waiting for pod "etcd-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.233430  248107 pod_ready.go:94] pod "etcd-addons-663794" is "Ready"
	I1115 09:08:48.233470  248107 pod_ready.go:86] duration metric: took 4.150626ms for pod "etcd-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.235487  248107 pod_ready.go:83] waiting for pod "kube-apiserver-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.239516  248107 pod_ready.go:94] pod "kube-apiserver-addons-663794" is "Ready"
	I1115 09:08:48.239540  248107 pod_ready.go:86] duration metric: took 4.028368ms for pod "kube-apiserver-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.242027  248107 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.551159  248107 pod_ready.go:94] pod "kube-controller-manager-addons-663794" is "Ready"
	I1115 09:08:48.551185  248107 pod_ready.go:86] duration metric: took 309.140415ms for pod "kube-controller-manager-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:48.751092  248107 pod_ready.go:83] waiting for pod "kube-proxy-kjfgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:49.150351  248107 pod_ready.go:94] pod "kube-proxy-kjfgf" is "Ready"
	I1115 09:08:49.150376  248107 pod_ready.go:86] duration metric: took 399.248241ms for pod "kube-proxy-kjfgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:49.352696  248107 pod_ready.go:83] waiting for pod "kube-scheduler-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:49.750874  248107 pod_ready.go:94] pod "kube-scheduler-addons-663794" is "Ready"
	I1115 09:08:49.750919  248107 pod_ready.go:86] duration metric: took 398.195381ms for pod "kube-scheduler-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:08:49.750934  248107 pod_ready.go:40] duration metric: took 1.605004891s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:08:49.792841  248107 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:08:49.794671  248107 out.go:179] * Done! kubectl is now configured to use "addons-663794" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.603985746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915603828542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9371810-ae94-41d2-9306-9850da4a1ed8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.605210091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.605276588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.606414080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.649582307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9d0d454-da50-488f-8174-c61e98f3fbc5 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.649705946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9d0d454-da50-488f-8174-c61e98f3fbc5 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.651237009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b38abff0-d3be-49a0-af17-edff86077ea9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.652764201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915652725637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b38abff0-d3be-49a0-af17-edff86077ea9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.653714377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.653777579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.654198026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.690409063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24f50b08-e237-4140-87c5-8c367cd24082 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.690498441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24f50b08-e237-4140-87c5-8c367cd24082 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.692136043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a02caaa-635a-444e-9c9c-a84adb45897b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.693422947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915693393878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a02caaa-635a-444e-9c9c-a84adb45897b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694285945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694420061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694993471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.730301025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d10e4b77-9345-468a-973a-6fce9e4e488f name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.730464175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d10e4b77-9345-468a-973a-6fce9e4e488f name=/runtime.v1.RuntimeService/Version
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.731729561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afad5861-cdd3-4342-926b-0590e4bb3153 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733107281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915733074290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afad5861-cdd3-4342-926b-0590e4bb3153 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733765055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733832817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.734296368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b8179eace547       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   a1545119edac5       nginx
	45fac1d9822d0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6f4a005ba20f2       busybox
	a3fae8541a1e2       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   5f299fe3db0e1       ingress-nginx-controller-6c8bf45fb-pnxxs
	2dc686e8d69cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   8320cba6a5b9b       ingress-nginx-admission-patch-z6xbv
	e9ef2926ff042       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   c0c91f0d190ca       ingress-nginx-admission-create-msxbx
	f04c322f6ed15       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   d2cd97bf20bec       local-path-provisioner-648f6765c9-t6qdh
	31788a34c6a9c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   9480436cb50ca       kube-ingress-dns-minikube
	175c0420aa1f9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   e26fb9ea12468       amd-gpu-device-plugin-wqpn5
	546ccdaa0af30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   e5df1977dc931       storage-provisioner
	27b130e9bb0ea       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   c53566f793889       coredns-66bc5c9577-cm284
	2d2949bab9cc7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   63f9a0a57163f       kube-proxy-kjfgf
	a2896ea62bda0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   45c3067eb99e6       kube-scheduler-addons-663794
	95b051d486559       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   5e5f8f18eb190       kube-controller-manager-addons-663794
	41ad4b54254b4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   3e81ea99733df       kube-apiserver-addons-663794
	e1157ae27d3b1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   64aa18e444382       etcd-addons-663794
	
	
	==> coredns [27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.26:44250 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001056286s
	[INFO] 10.244.0.26:46022 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018473s
	
	
	==> describe nodes <==
	Name:               addons-663794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-663794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=addons-663794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_07_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-663794
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:07:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-663794
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:11:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:09:51 +0000   Sat, 15 Nov 2025 09:07:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:09:51 +0000   Sat, 15 Nov 2025 09:07:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:09:51 +0000   Sat, 15 Nov 2025 09:07:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:09:51 +0000   Sat, 15 Nov 2025 09:07:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    addons-663794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 39d0412532a9467fac204c898cc459d3
	  System UUID:                39d04125-32a9-467f-ac20-4c898cc459d3
	  Boot ID:                    5ab78582-c99d-444d-b6bf-1f7065465677
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-5d498dc89-6vxps             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-pnxxs    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-wqpn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-66bc5c9577-cm284                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-663794                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-663794                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-663794       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-kjfgf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-663794                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  local-path-storage          local-path-provisioner-648f6765c9-t6qdh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node addons-663794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node addons-663794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node addons-663794 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m37s                  kubelet          Node addons-663794 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node addons-663794 event: Registered Node addons-663794 in Controller
	
	
	==> dmesg <==
	[  +0.030977] kauditd_printk_skb: 293 callbacks suppressed
	[  +3.715080] kauditd_printk_skb: 404 callbacks suppressed
	[  +5.960734] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.521316] kauditd_printk_skb: 11 callbacks suppressed
	[Nov15 09:08] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.292908] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.069759] kauditd_printk_skb: 5 callbacks suppressed
	[  +3.192395] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.133658] kauditd_printk_skb: 116 callbacks suppressed
	[  +0.959743] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000034] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.417162] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000078] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.322707] kauditd_printk_skb: 41 callbacks suppressed
	[Nov15 09:09] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.902259] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.000021] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.361622] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.669380] kauditd_printk_skb: 174 callbacks suppressed
	[  +0.687785] kauditd_printk_skb: 135 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 88 callbacks suppressed
	[  +7.592297] kauditd_printk_skb: 101 callbacks suppressed
	[Nov15 09:10] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.846231] kauditd_printk_skb: 41 callbacks suppressed
	[Nov15 09:11] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e] <==
	{"level":"warn","ts":"2025-11-15T09:08:14.824246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.575485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:08:14.824265Z","caller":"traceutil/trace.go:172","msg":"trace[377924183] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:1020; }","duration":"217.597067ms","start":"2025-11-15T09:08:14.606663Z","end":"2025-11-15T09:08:14.824260Z","steps":["trace[377924183] 'agreement among raft nodes before linearized reading'  (duration: 217.551674ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:08:14.824350Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.776699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:08:14.824362Z","caller":"traceutil/trace.go:172","msg":"trace[767992002] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"118.789442ms","start":"2025-11-15T09:08:14.705569Z","end":"2025-11-15T09:08:14.824358Z","steps":["trace[767992002] 'agreement among raft nodes before linearized reading'  (duration: 118.767726ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:08:14.824454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.210582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:08:14.824466Z","caller":"traceutil/trace.go:172","msg":"trace[1805512740] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"210.223166ms","start":"2025-11-15T09:08:14.614239Z","end":"2025-11-15T09:08:14.824462Z","steps":["trace[1805512740] 'agreement among raft nodes before linearized reading'  (duration: 210.203452ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:08:16.557562Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T09:08:16.253153Z","time spent":"304.406864ms","remote":"127.0.0.1:39208","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-11-15T09:08:27.062862Z","caller":"traceutil/trace.go:172","msg":"trace[1709609591] linearizableReadLoop","detail":"{readStateIndex:1091; appliedIndex:1091; }","duration":"145.495768ms","start":"2025-11-15T09:08:26.917344Z","end":"2025-11-15T09:08:27.062840Z","steps":["trace[1709609591] 'read index received'  (duration: 145.490502ms)","trace[1709609591] 'applied index is now lower than readState.Index'  (duration: 4.429µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T09:08:27.063091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.675015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-15T09:08:27.063119Z","caller":"traceutil/trace.go:172","msg":"trace[1077825839] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1059; }","duration":"145.771199ms","start":"2025-11-15T09:08:26.917341Z","end":"2025-11-15T09:08:27.063112Z","steps":["trace[1077825839] 'agreement among raft nodes before linearized reading'  (duration: 145.597525ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:08:27.064586Z","caller":"traceutil/trace.go:172","msg":"trace[290919413] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"205.8024ms","start":"2025-11-15T09:08:26.858771Z","end":"2025-11-15T09:08:27.064573Z","steps":["trace[290919413] 'process raft request'  (duration: 205.163928ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:08:41.570742Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.708725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:08:41.570876Z","caller":"traceutil/trace.go:172","msg":"trace[1839882819] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1167; }","duration":"251.88631ms","start":"2025-11-15T09:08:41.318974Z","end":"2025-11-15T09:08:41.570860Z","steps":["trace[1839882819] 'range keys from in-memory index tree'  (duration: 251.650651ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:09:15.666078Z","caller":"traceutil/trace.go:172","msg":"trace[509285057] transaction","detail":"{read_only:false; response_revision:1359; number_of_response:1; }","duration":"204.273294ms","start":"2025-11-15T09:09:15.461754Z","end":"2025-11-15T09:09:15.666028Z","steps":["trace[509285057] 'process raft request'  (duration: 204.13976ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:09:16.930969Z","caller":"traceutil/trace.go:172","msg":"trace[94193585] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1384; }","duration":"180.967168ms","start":"2025-11-15T09:09:16.749991Z","end":"2025-11-15T09:09:16.930958Z","steps":["trace[94193585] 'process raft request'  (duration: 180.823665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:09:18.212152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.603948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:09:18.212230Z","caller":"traceutil/trace.go:172","msg":"trace[828949300] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1394; }","duration":"113.713642ms","start":"2025-11-15T09:09:18.098503Z","end":"2025-11-15T09:09:18.212216Z","steps":["trace[828949300] 'range keys from in-memory index tree'  (duration: 113.50551ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:09:43.682190Z","caller":"traceutil/trace.go:172","msg":"trace[2142164999] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"115.756208ms","start":"2025-11-15T09:09:43.566420Z","end":"2025-11-15T09:09:43.682176Z","steps":["trace[2142164999] 'process raft request'  (duration: 115.603856ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:10:06.364616Z","caller":"traceutil/trace.go:172","msg":"trace[1037207073] linearizableReadLoop","detail":"{readStateIndex:1804; appliedIndex:1804; }","duration":"256.834129ms","start":"2025-11-15T09:10:06.107753Z","end":"2025-11-15T09:10:06.364588Z","steps":["trace[1037207073] 'read index received'  (duration: 256.826377ms)","trace[1037207073] 'applied index is now lower than readState.Index'  (duration: 6.938µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T09:10:06.365462Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.874903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.78\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-15T09:10:06.365522Z","caller":"traceutil/trace.go:172","msg":"trace[1661939143] range","detail":"{range_begin:/registry/masterleases/192.168.39.78; range_end:; response_count:1; response_revision:1743; }","duration":"104.956768ms","start":"2025-11-15T09:10:06.260555Z","end":"2025-11-15T09:10:06.365512Z","steps":["trace[1661939143] 'agreement among raft nodes before linearized reading'  (duration: 104.770759ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:10:06.365589Z","caller":"traceutil/trace.go:172","msg":"trace[99288795] transaction","detail":"{read_only:false; response_revision:1743; number_of_response:1; }","duration":"379.762156ms","start":"2025-11-15T09:10:05.985805Z","end":"2025-11-15T09:10:06.365567Z","steps":["trace[99288795] 'process raft request'  (duration: 378.802432ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:10:06.365739Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T09:10:05.985720Z","time spent":"379.903993ms","remote":"127.0.0.1:39466","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1741 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-15T09:10:06.366697Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.948187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T09:10:06.366721Z","caller":"traceutil/trace.go:172","msg":"trace[709321246] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1743; }","duration":"258.978183ms","start":"2025-11-15T09:10:06.107736Z","end":"2025-11-15T09:10:06.366714Z","steps":["trace[709321246] 'agreement among raft nodes before linearized reading'  (duration: 256.973844ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:56 up 5 min,  0 users,  load average: 0.40, 0.97, 0.50
	Linux addons-663794 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:08:04.338696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.70.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.70.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.70.92:443: connect: connection refused" logger="UnhandledError"
	I1115 09:08:04.387422       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:09:01.608795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.78:8443->192.168.39.1:54710: use of closed network connection
	E1115 09:09:01.798605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.78:8443->192.168.39.1:54748: use of closed network connection
	I1115 09:09:10.875528       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.193.91"}
	I1115 09:09:30.933482       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:09:31.106451       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.94.245"}
	I1115 09:09:51.387269       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1115 09:10:05.359146       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1115 09:10:08.567213       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:10:08.567463       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:10:08.603443       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:10:08.603639       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:10:08.613440       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:10:08.613485       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:10:08.628764       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:10:08.629461       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1115 09:10:08.736678       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1115 09:10:08.736785       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1115 09:10:09.614438       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1115 09:10:09.737802       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1115 09:10:09.857238       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1115 09:11:54.568441       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.225.159"}
	
	
	==> kube-controller-manager [95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac] <==
	E1115 09:10:19.407964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:20.171437       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:20.172460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1115 09:10:21.467475       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:10:21.467521       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:10:21.518544       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:10:21.518650       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:10:25.696382       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:25.697309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:27.783768       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:27.785118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:28.773093       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:28.774125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:48.104941       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:48.106128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:48.973734       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:48.974997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:10:49.178319       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:10:49.179248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:11:17.044533       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:11:17.045725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:11:23.241364       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:11:23.242507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1115 09:11:39.269593       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1115 09:11:39.270643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0] <==
	I1115 09:07:23.829221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:07:23.935880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:07:23.935923       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.78"]
	E1115 09:07:23.936020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:07:24.126269       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1115 09:07:24.126359       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 09:07:24.126387       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:07:24.268967       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:07:24.274748       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:07:24.274950       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:07:24.387412       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:07:24.388991       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:07:24.389532       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:07:24.391253       1 config.go:200] "Starting service config controller"
	I1115 09:07:24.404299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:07:24.398735       1 config.go:309] "Starting node config controller"
	I1115 09:07:24.405931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:07:24.406597       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:07:24.404167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:07:24.496942       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:07:24.505985       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:07:24.508476       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39] <==
	E1115 09:07:14.400290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:07:14.400426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:07:14.400428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:07:14.400546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:07:14.400577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:07:14.400729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:07:14.400743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:07:14.400816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:07:14.400982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:07:14.400979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:07:15.283923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:07:15.289442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:07:15.299089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:07:15.462481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:07:15.474626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:07:15.510284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:07:15.532222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:07:15.575368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:07:15.593513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:07:15.598324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:07:15.607760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:07:15.609995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:07:15.632790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:07:15.659298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1115 09:07:17.791499       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.154169    1494 scope.go:117] "RemoveContainer" containerID="c6829847745f7047c1bc63fb8424a148760819355c00a3656c78baef2b3593d6"
	Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.276740    1494 scope.go:117] "RemoveContainer" containerID="fa5c06d12276ddd7e1d0cb996d9d162fdd1dfeb3bd565989804a51f0a133b537"
	Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.393332    1494 scope.go:117] "RemoveContainer" containerID="1ef13e73ed365e69801a6b7ca589b6fab8bcc0ea40e8500113254b888618fb06"
	Nov 15 09:10:27 addons-663794 kubelet[1494]: E1115 09:10:27.562941    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197827562174336  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:27 addons-663794 kubelet[1494]: E1115 09:10:27.562986    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197827562174336  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:34 addons-663794 kubelet[1494]: I1115 09:10:34.334462    1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wqpn5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:10:37 addons-663794 kubelet[1494]: E1115 09:10:37.565333    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197837564885153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:37 addons-663794 kubelet[1494]: E1115 09:10:37.565358    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197837564885153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:47 addons-663794 kubelet[1494]: E1115 09:10:47.568331    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197847567728584  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:47 addons-663794 kubelet[1494]: E1115 09:10:47.568362    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197847567728584  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:57 addons-663794 kubelet[1494]: E1115 09:10:57.571420    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197857570926903  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:10:57 addons-663794 kubelet[1494]: E1115 09:10:57.571448    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197857570926903  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:07 addons-663794 kubelet[1494]: E1115 09:11:07.574911    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197867574315420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:07 addons-663794 kubelet[1494]: E1115 09:11:07.574937    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197867574315420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:17 addons-663794 kubelet[1494]: E1115 09:11:17.578434    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197877577715698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:17 addons-663794 kubelet[1494]: E1115 09:11:17.578519    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197877577715698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:27 addons-663794 kubelet[1494]: E1115 09:11:27.581370    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197887580840089  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:27 addons-663794 kubelet[1494]: E1115 09:11:27.581417    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197887580840089  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:29 addons-663794 kubelet[1494]: I1115 09:11:29.334026    1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:11:37 addons-663794 kubelet[1494]: I1115 09:11:37.338453    1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wqpn5" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:11:37 addons-663794 kubelet[1494]: E1115 09:11:37.585205    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197897584774738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:37 addons-663794 kubelet[1494]: E1115 09:11:37.585230    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197897584774738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:47 addons-663794 kubelet[1494]: E1115 09:11:47.589124    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197907588649386  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:47 addons-663794 kubelet[1494]: E1115 09:11:47.589473    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197907588649386  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 15 09:11:54 addons-663794 kubelet[1494]: I1115 09:11:54.615638    1494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49fv\" (UniqueName: \"kubernetes.io/projected/05b53b7c-b0c3-4d2a-97a7-a8393de1fdca-kube-api-access-n49fv\") pod \"hello-world-app-5d498dc89-6vxps\" (UID: \"05b53b7c-b0c3-4d2a-97a7-a8393de1fdca\") " pod="default/hello-world-app-5d498dc89-6vxps"
	
	
	==> storage-provisioner [546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7] <==
	W1115 09:11:30.820592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:32.824528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:32.834244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:34.837499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:34.843602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:36.847618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:36.853737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:38.857679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:38.862843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:40.866031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:40.878627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:42.882421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:42.889620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:44.892816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:44.899521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:46.902852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:46.908252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:48.912678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:48.919734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:50.922635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:50.926948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:52.931358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:52.937622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:54.942870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:54.952463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-663794 -n addons-663794
helpers_test.go:269: (dbg) Run:  kubectl --context addons-663794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv: exit status 1 (74.625485ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-6vxps
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-663794/192.168.39.78
	Start Time:       Sat, 15 Nov 2025 09:11:54 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n49fv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n49fv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-6vxps to addons-663794
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-msxbx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z6xbv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable ingress-dns --alsologtostderr -v=1: (1.710136363s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable ingress --alsologtostderr -v=1: (7.711676595s)
--- FAIL: TestAddons/parallel/Ingress (155.58s)

                                                
                                    
TestPreload (133.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1115 09:54:59.317580  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m7.406152241s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-759272 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-759272 image pull gcr.io/k8s-minikube/busybox: (3.499101721s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-759272
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-759272: (6.799120583s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (52.617524465s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-759272 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
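The sequence that produced this failure can be replayed by hand. Below is a minimal sketch, assuming the same out/minikube-linux-amd64 binary and the profile name used by this run; every command is taken verbatim from the preload_test.go steps logged above, so whether the pulled image survives the restart can be checked directly.

	# start a cluster on Kubernetes v1.32.0 without the preload tarball
	out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	# pull an extra image so it lands in the CRI-O image store
	out/minikube-linux-amd64 -p test-preload-759272 image pull gcr.io/k8s-minikube/busybox
	# stop and restart the same profile
	out/minikube-linux-amd64 stop -p test-preload-759272
	out/minikube-linux-amd64 start -p test-preload-759272 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	# the test expects gcr.io/k8s-minikube/busybox to appear here; in this run it did not
	out/minikube-linux-amd64 -p test-preload-759272 image list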
panic.go:636: *** TestPreload FAILED at 2025-11-15 09:56:46.411396669 +0000 UTC m=+3041.941123201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-759272 -n test-preload-759272
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-759272 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-759272 logs -n 25: (1.108847977s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-635899 ssh -n multinode-635899-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ ssh     │ multinode-635899 ssh -n multinode-635899 sudo cat /home/docker/cp-test_multinode-635899-m03_multinode-635899.txt                                          │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ cp      │ multinode-635899 cp multinode-635899-m03:/home/docker/cp-test.txt multinode-635899-m02:/home/docker/cp-test_multinode-635899-m03_multinode-635899-m02.txt │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ ssh     │ multinode-635899 ssh -n multinode-635899-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ ssh     │ multinode-635899 ssh -n multinode-635899-m02 sudo cat /home/docker/cp-test_multinode-635899-m03_multinode-635899-m02.txt                                  │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ node    │ multinode-635899 node stop m03                                                                                                                            │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:43 UTC │
	│ node    │ multinode-635899 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:43 UTC │ 15 Nov 25 09:44 UTC │
	│ node    │ list -p multinode-635899                                                                                                                                  │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │                     │
	│ stop    │ -p multinode-635899                                                                                                                                       │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:44 UTC │ 15 Nov 25 09:47 UTC │
	│ start   │ -p multinode-635899 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:47 UTC │ 15 Nov 25 09:49 UTC │
	│ node    │ list -p multinode-635899                                                                                                                                  │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:49 UTC │                     │
	│ node    │ multinode-635899 node delete m03                                                                                                                          │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:49 UTC │ 15 Nov 25 09:49 UTC │
	│ stop    │ multinode-635899 stop                                                                                                                                     │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:49 UTC │ 15 Nov 25 09:52 UTC │
	│ start   │ -p multinode-635899 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:52 UTC │ 15 Nov 25 09:53 UTC │
	│ node    │ list -p multinode-635899                                                                                                                                  │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:53 UTC │                     │
	│ start   │ -p multinode-635899-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-635899-m02 │ jenkins │ v1.37.0 │ 15 Nov 25 09:53 UTC │                     │
	│ start   │ -p multinode-635899-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-635899-m03 │ jenkins │ v1.37.0 │ 15 Nov 25 09:53 UTC │ 15 Nov 25 09:54 UTC │
	│ node    │ add -p multinode-635899                                                                                                                                   │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:54 UTC │                     │
	│ delete  │ -p multinode-635899-m03                                                                                                                                   │ multinode-635899-m03 │ jenkins │ v1.37.0 │ 15 Nov 25 09:54 UTC │ 15 Nov 25 09:54 UTC │
	│ delete  │ -p multinode-635899                                                                                                                                       │ multinode-635899     │ jenkins │ v1.37.0 │ 15 Nov 25 09:54 UTC │ 15 Nov 25 09:54 UTC │
	│ start   │ -p test-preload-759272 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-759272  │ jenkins │ v1.37.0 │ 15 Nov 25 09:54 UTC │ 15 Nov 25 09:55 UTC │
	│ image   │ test-preload-759272 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-759272  │ jenkins │ v1.37.0 │ 15 Nov 25 09:55 UTC │ 15 Nov 25 09:55 UTC │
	│ stop    │ -p test-preload-759272                                                                                                                                    │ test-preload-759272  │ jenkins │ v1.37.0 │ 15 Nov 25 09:55 UTC │ 15 Nov 25 09:55 UTC │
	│ start   │ -p test-preload-759272 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-759272  │ jenkins │ v1.37.0 │ 15 Nov 25 09:55 UTC │ 15 Nov 25 09:56 UTC │
	│ image   │ test-preload-759272 image list                                                                                                                            │ test-preload-759272  │ jenkins │ v1.37.0 │ 15 Nov 25 09:56 UTC │ 15 Nov 25 09:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:55:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:55:53.652140  270048 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:55:53.652429  270048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:55:53.652441  270048 out.go:374] Setting ErrFile to fd 2...
	I1115 09:55:53.652471  270048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:55:53.652694  270048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:55:53.653223  270048 out.go:368] Setting JSON to false
	I1115 09:55:53.654161  270048 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9496,"bootTime":1763191058,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:55:53.654247  270048 start.go:143] virtualization: kvm guest
	I1115 09:55:53.656225  270048 out.go:179] * [test-preload-759272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:55:53.657565  270048 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:55:53.657550  270048 notify.go:221] Checking for updates...
	I1115 09:55:53.658878  270048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:55:53.660161  270048 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:55:53.661231  270048 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:55:53.662477  270048 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:55:53.663989  270048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:55:53.665480  270048 config.go:182] Loaded profile config "test-preload-759272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 09:55:53.667031  270048 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 09:55:53.668191  270048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:55:53.702582  270048 out.go:179] * Using the kvm2 driver based on existing profile
	I1115 09:55:53.703749  270048 start.go:309] selected driver: kvm2
	I1115 09:55:53.703763  270048 start.go:930] validating driver "kvm2" against &{Name:test-preload-759272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-759272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:55:53.703848  270048 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:55:53.704784  270048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:55:53.704814  270048 cni.go:84] Creating CNI manager for ""
	I1115 09:55:53.704853  270048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:55:53.704895  270048 start.go:353] cluster config:
	{Name:test-preload-759272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-759272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:55:53.704994  270048 iso.go:125] acquiring lock: {Name:mkff40ddaa37657d9e8283719561f1fce12069ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:55:53.706680  270048 out.go:179] * Starting "test-preload-759272" primary control-plane node in "test-preload-759272" cluster
	I1115 09:55:53.707845  270048 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 09:55:54.615591  270048 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:55:54.615662  270048 cache.go:65] Caching tarball of preloaded images
	I1115 09:55:54.615870  270048 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 09:55:54.617958  270048 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1115 09:55:54.619014  270048 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:55:54.713862  270048 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1115 09:55:54.713913  270048 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:56:04.182283  270048 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
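	For reference, the preload tarball fetched above can be checked against the MD5 the log reports from the GCS API (2acdb4dde52794f2167c79dcee7507ae); a minimal sketch, using the cache path from this run:
	  $ md5sum /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	  # expected digest: 2acdb4dde52794f2167c79dcee7507ae (value returned by the GCS API above)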
	I1115 09:56:04.182416  270048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/config.json ...
	I1115 09:56:04.183393  270048 start.go:360] acquireMachinesLock for test-preload-759272: {Name:mkd96327c544e60a7a5bc14d0ad542aaa69bb5ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1115 09:56:04.183485  270048 start.go:364] duration metric: took 62.729µs to acquireMachinesLock for "test-preload-759272"
	I1115 09:56:04.183503  270048 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:56:04.183509  270048 fix.go:54] fixHost starting: 
	I1115 09:56:04.185352  270048 fix.go:112] recreateIfNeeded on test-preload-759272: state=Stopped err=<nil>
	W1115 09:56:04.185374  270048 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:56:04.186769  270048 out.go:252] * Restarting existing kvm2 VM for "test-preload-759272" ...
	I1115 09:56:04.186811  270048 main.go:143] libmachine: starting domain...
	I1115 09:56:04.186821  270048 main.go:143] libmachine: ensuring networks are active...
	I1115 09:56:04.187557  270048 main.go:143] libmachine: Ensuring network default is active
	I1115 09:56:04.187963  270048 main.go:143] libmachine: Ensuring network mk-test-preload-759272 is active
	I1115 09:56:04.188497  270048 main.go:143] libmachine: getting domain XML...
	I1115 09:56:04.189522  270048 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-759272</name>
	  <uuid>3af07a21-9ebd-4583-be2b-10ef27d9a4ae</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/test-preload-759272.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:56:72:d1'/>
	      <source network='mk-test-preload-759272'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ce:4e:ff'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
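	The XML above is the existing libvirt definition that minikube restarts for this profile. As a hedged aside (not something the test itself runs), the same domain could be inspected or started by hand with virsh using the names from this run:
	  $ virsh -c qemu:///system dominfo test-preload-759272     # state, vCPUs, memory
	  $ virsh -c qemu:///system dumpxml test-preload-759272     # should match the definition logged above
	  $ virsh -c qemu:///system start test-preload-759272       # manual equivalent of "starting domain..." below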
	
	I1115 09:56:05.449400  270048 main.go:143] libmachine: waiting for domain to start...
	I1115 09:56:05.450851  270048 main.go:143] libmachine: domain is now running
	I1115 09:56:05.450873  270048 main.go:143] libmachine: waiting for IP...
	I1115 09:56:05.451803  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:05.452331  270048 main.go:143] libmachine: domain test-preload-759272 has current primary IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:05.452350  270048 main.go:143] libmachine: found domain IP: 192.168.39.153
	I1115 09:56:05.452358  270048 main.go:143] libmachine: reserving static IP address...
	I1115 09:56:05.452740  270048 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-759272", mac: "52:54:00:56:72:d1", ip: "192.168.39.153"} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:54:51 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:05.452779  270048 main.go:143] libmachine: skip adding static IP to network mk-test-preload-759272 - found existing host DHCP lease matching {name: "test-preload-759272", mac: "52:54:00:56:72:d1", ip: "192.168.39.153"}
	I1115 09:56:05.452798  270048 main.go:143] libmachine: reserved static IP address 192.168.39.153 for domain test-preload-759272
	I1115 09:56:05.452806  270048 main.go:143] libmachine: waiting for SSH...
	I1115 09:56:05.452821  270048 main.go:143] libmachine: Getting to WaitForSSH function...
	I1115 09:56:05.455208  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:05.455608  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:54:51 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:05.455647  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:05.455820  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:05.456233  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:05.456255  270048 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1115 09:56:08.550747  270048 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.153:22: connect: no route to host
	I1115 09:56:14.630770  270048 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.153:22: connect: no route to host
	I1115 09:56:17.747183  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:56:17.750747  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.751251  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:17.751294  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.751567  270048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/config.json ...
	I1115 09:56:17.751818  270048 machine.go:94] provisionDockerMachine start ...
	I1115 09:56:17.754468  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.754852  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:17.754880  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.755064  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:17.755252  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:17.755262  270048 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:56:17.878998  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1115 09:56:17.879035  270048 buildroot.go:166] provisioning hostname "test-preload-759272"
	I1115 09:56:17.882145  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.882636  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:17.882665  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:17.882808  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:17.883000  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:17.883011  270048 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-759272 && echo "test-preload-759272" | sudo tee /etc/hostname
	I1115 09:56:18.010690  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-759272
	
	I1115 09:56:18.013827  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.014230  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.014259  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.014433  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:18.014643  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:18.014659  270048 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-759272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-759272/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-759272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:56:18.136178  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:56:18.136223  270048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21895-243545/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-243545/.minikube}
	I1115 09:56:18.136284  270048 buildroot.go:174] setting up certificates
	I1115 09:56:18.136302  270048 provision.go:84] configureAuth start
	I1115 09:56:18.139342  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.139702  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.139734  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.141766  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.142108  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.142134  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.142257  270048 provision.go:143] copyHostCerts
	I1115 09:56:18.142339  270048 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-243545/.minikube/ca.pem, removing ...
	I1115 09:56:18.142352  270048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-243545/.minikube/ca.pem
	I1115 09:56:18.142438  270048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/ca.pem (1082 bytes)
	I1115 09:56:18.142638  270048 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-243545/.minikube/cert.pem, removing ...
	I1115 09:56:18.142650  270048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-243545/.minikube/cert.pem
	I1115 09:56:18.142698  270048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/cert.pem (1123 bytes)
	I1115 09:56:18.142786  270048 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-243545/.minikube/key.pem, removing ...
	I1115 09:56:18.142797  270048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-243545/.minikube/key.pem
	I1115 09:56:18.142834  270048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/key.pem (1675 bytes)
	I1115 09:56:18.142911  270048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem org=jenkins.test-preload-759272 san=[127.0.0.1 192.168.39.153 localhost minikube test-preload-759272]
	I1115 09:56:18.372607  270048 provision.go:177] copyRemoteCerts
	I1115 09:56:18.372678  270048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:56:18.375288  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.375744  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.375778  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.375913  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:18.461696  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:56:18.490015  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 09:56:18.517863  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:56:18.544920  270048 provision.go:87] duration metric: took 408.602764ms to configureAuth
	I1115 09:56:18.544950  270048 buildroot.go:189] setting minikube options for container-runtime
	I1115 09:56:18.545143  270048 config.go:182] Loaded profile config "test-preload-759272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 09:56:18.548029  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.548355  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.548378  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.548561  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:18.548752  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:18.548767  270048 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:56:18.807131  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:56:18.807158  270048 machine.go:97] duration metric: took 1.055324353s to provisionDockerMachine
	I1115 09:56:18.807169  270048 start.go:293] postStartSetup for "test-preload-759272" (driver="kvm2")
	I1115 09:56:18.807179  270048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:56:18.807242  270048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:56:18.810196  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.810609  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.810638  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.810802  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:18.896863  270048 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:56:18.901461  270048 info.go:137] Remote host: Buildroot 2025.02
	I1115 09:56:18.901487  270048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/addons for local assets ...
	I1115 09:56:18.901564  270048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/files for local assets ...
	I1115 09:56:18.901675  270048 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-243545/.minikube/files/etc/ssl/certs/2474452.pem -> 2474452.pem in /etc/ssl/certs
	I1115 09:56:18.901793  270048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:56:18.913114  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/files/etc/ssl/certs/2474452.pem --> /etc/ssl/certs/2474452.pem (1708 bytes)
	I1115 09:56:18.944688  270048 start.go:296] duration metric: took 137.504444ms for postStartSetup
	I1115 09:56:18.944737  270048 fix.go:56] duration metric: took 14.761219958s for fixHost
	I1115 09:56:18.947715  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.948134  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:18.948154  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:18.948317  270048 main.go:143] libmachine: Using SSH client type: native
	I1115 09:56:18.948518  270048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I1115 09:56:18.948528  270048 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1115 09:56:19.061739  270048 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763200579.017795190
	
	I1115 09:56:19.061762  270048 fix.go:216] guest clock: 1763200579.017795190
	I1115 09:56:19.061770  270048 fix.go:229] Guest: 2025-11-15 09:56:19.01779519 +0000 UTC Remote: 2025-11-15 09:56:18.944741279 +0000 UTC m=+25.340427758 (delta=73.053911ms)
	I1115 09:56:19.061786  270048 fix.go:200] guest clock delta is within tolerance: 73.053911ms
	I1115 09:56:19.061791  270048 start.go:83] releasing machines lock for "test-preload-759272", held for 14.87829463s
	I1115 09:56:19.064583  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.065007  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:19.065036  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.065622  270048 ssh_runner.go:195] Run: cat /version.json
	I1115 09:56:19.065672  270048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:56:19.068563  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.068656  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.069034  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:19.069060  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.069065  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:19.069083  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:19.069232  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:19.069378  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:19.181407  270048 ssh_runner.go:195] Run: systemctl --version
	I1115 09:56:19.188087  270048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:56:19.334194  270048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:56:19.341045  270048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:56:19.341096  270048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:56:19.360949  270048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:56:19.360978  270048 start.go:496] detecting cgroup driver to use...
	I1115 09:56:19.361061  270048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:56:19.379437  270048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:56:19.396271  270048 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:56:19.396341  270048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:56:19.414486  270048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:56:19.431317  270048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:56:19.573588  270048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:56:19.789953  270048 docker.go:234] disabling docker service ...
	I1115 09:56:19.790035  270048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:56:19.806568  270048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:56:19.821222  270048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:56:19.973302  270048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:56:20.116571  270048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:56:20.132656  270048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:56:20.153988  270048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1115 09:56:20.154052  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.166333  270048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:56:20.166405  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.178282  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.190040  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.202177  270048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:56:20.214863  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.227003  270048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.246177  270048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:56:20.258691  270048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:56:20.268631  270048 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1115 09:56:20.268702  270048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1115 09:56:20.288186  270048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
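	The modprobe and echo above enable br_netfilter and IPv4 forwarding for the current boot only, which is all this run needs. On a long-lived node the conventional persistent equivalent would look like the following (illustrative sketch, not part of this test):
	  $ echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	  $ printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	  $ sudo sysctl --system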
	I1115 09:56:20.300982  270048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:56:20.442240  270048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:56:20.548787  270048 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:56:20.548889  270048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:56:20.554536  270048 start.go:564] Will wait 60s for crictl version
	I1115 09:56:20.554607  270048 ssh_runner.go:195] Run: which crictl
	I1115 09:56:20.559114  270048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1115 09:56:20.598206  270048 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1115 09:56:20.598291  270048 ssh_runner.go:195] Run: crio --version
	I1115 09:56:20.629889  270048 ssh_runner.go:195] Run: crio --version
	I1115 09:56:20.661664  270048 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1115 09:56:20.666016  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:20.666395  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:20.666422  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:20.666641  270048 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1115 09:56:20.671411  270048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:56:20.687246  270048 kubeadm.go:884] updating cluster {Name:test-preload-759272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-759272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:56:20.687419  270048 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1115 09:56:20.687524  270048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:56:20.730633  270048 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1115 09:56:20.730724  270048 ssh_runner.go:195] Run: which lz4
	I1115 09:56:20.735361  270048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1115 09:56:20.740107  270048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1115 09:56:20.740151  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1115 09:56:22.107778  270048 crio.go:462] duration metric: took 1.372462575s to copy over tarball
	I1115 09:56:22.107866  270048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1115 09:56:23.782153  270048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.674252583s)
	I1115 09:56:23.782192  270048 crio.go:469] duration metric: took 1.674379539s to extract the tarball
	I1115 09:56:23.782201  270048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1115 09:56:23.822279  270048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:56:23.869401  270048 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:56:23.869436  270048 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:56:23.869465  270048 kubeadm.go:935] updating node { 192.168.39.153 8443 v1.32.0 crio true true} ...
	I1115 09:56:23.869585  270048 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-759272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-759272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
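	The kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below (319 bytes). A hedged way to confirm what the node actually ends up running once kubelet is started:
	  $ systemctl cat kubelet       # base unit plus the 10-kubeadm.conf drop-in
	  $ ps -o args= -C kubelet      # effective command line, should include --node-ip=192.168.39.153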
	I1115 09:56:23.869677  270048 ssh_runner.go:195] Run: crio config
	I1115 09:56:23.914672  270048 cni.go:84] Creating CNI manager for ""
	I1115 09:56:23.914698  270048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:56:23.914718  270048 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:56:23.914750  270048 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.153 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-759272 NodeName:test-preload-759272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:56:23.914859  270048 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-759272"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
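	The kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml.new below (2222 bytes). As an illustrative, hedged check that this test does not perform, the file could be exercised against the matching kubeadm binary without touching the cluster:
	  $ sudo /var/lib/minikube/binaries/v1.32.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run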
	
	I1115 09:56:23.914928  270048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1115 09:56:23.926716  270048 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:56:23.926797  270048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:56:23.937853  270048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1115 09:56:23.957309  270048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:56:23.976927  270048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1115 09:56:23.997515  270048 ssh_runner.go:195] Run: grep 192.168.39.153	control-plane.minikube.internal$ /etc/hosts
	I1115 09:56:24.001532  270048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:56:24.015830  270048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:56:24.161538  270048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:56:24.195946  270048 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272 for IP: 192.168.39.153
	I1115 09:56:24.195967  270048 certs.go:195] generating shared ca certs ...
	I1115 09:56:24.195984  270048 certs.go:227] acquiring lock for ca certs: {Name:mk5e9c8388448c40ecbfe3d7332e5965c3ae4b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:56:24.196146  270048 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key
	I1115 09:56:24.196202  270048 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key
	I1115 09:56:24.196216  270048 certs.go:257] generating profile certs ...
	I1115 09:56:24.196327  270048 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.key
	I1115 09:56:24.196407  270048 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/apiserver.key.dba4f41d
	I1115 09:56:24.196480  270048 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/proxy-client.key
	I1115 09:56:24.196645  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/247445.pem (1338 bytes)
	W1115 09:56:24.196688  270048 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-243545/.minikube/certs/247445_empty.pem, impossibly tiny 0 bytes
	I1115 09:56:24.196698  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:56:24.196729  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:56:24.196763  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:56:24.196879  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem (1675 bytes)
	I1115 09:56:24.196983  270048 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/files/etc/ssl/certs/2474452.pem (1708 bytes)
	I1115 09:56:24.197985  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:56:24.233956  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1115 09:56:24.276884  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:56:24.306748  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1115 09:56:24.334954  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 09:56:24.364374  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:56:24.391540  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:56:24.418678  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:56:24.446432  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/files/etc/ssl/certs/2474452.pem --> /usr/share/ca-certificates/2474452.pem (1708 bytes)
	I1115 09:56:24.473190  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:56:24.500074  270048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/certs/247445.pem --> /usr/share/ca-certificates/247445.pem (1338 bytes)
	I1115 09:56:24.528700  270048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:56:24.549417  270048 ssh_runner.go:195] Run: openssl version
	I1115 09:56:24.555920  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:56:24.568342  270048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:56:24.573204  270048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:07 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:56:24.573261  270048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:56:24.580075  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:56:24.592140  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247445.pem && ln -fs /usr/share/ca-certificates/247445.pem /etc/ssl/certs/247445.pem"
	I1115 09:56:24.605162  270048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247445.pem
	I1115 09:56:24.610755  270048 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/247445.pem
	I1115 09:56:24.610825  270048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247445.pem
	I1115 09:56:24.617720  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247445.pem /etc/ssl/certs/51391683.0"
	I1115 09:56:24.629978  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2474452.pem && ln -fs /usr/share/ca-certificates/2474452.pem /etc/ssl/certs/2474452.pem"
	I1115 09:56:24.642479  270048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2474452.pem
	I1115 09:56:24.647562  270048 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/2474452.pem
	I1115 09:56:24.647626  270048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2474452.pem
	I1115 09:56:24.654989  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2474452.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:56:24.668656  270048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:56:24.673870  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:56:24.681366  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:56:24.688474  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:56:24.695416  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:56:24.702644  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:56:24.709627  270048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
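The six openssl runs above all use `openssl x509 -checkend 86400`, i.e. they ask whether each control-plane certificate is still valid for at least another 24 hours before deciding whether regeneration is needed. A minimal Go sketch of the same check follows (this is not minikube's implementation; the certificate path is illustrative):

    // certcheck.go - sketch of the "-checkend 86400" test: load a PEM
    // certificate and report whether it expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative path; the log above checks several certs under /var/lib/minikube/certs.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: non-zero exit if
    	// NotAfter falls within 86400 seconds (24h) from now.
    	deadline := time.Now().Add(86400 * time.Second)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
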
	I1115 09:56:24.716325  270048 kubeadm.go:401] StartCluster: {Name:test-preload-759272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-759272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:56:24.716423  270048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:56:24.716484  270048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:56:24.752274  270048 cri.go:89] found id: ""
	I1115 09:56:24.752359  270048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:56:24.764269  270048 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:56:24.764297  270048 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:56:24.764351  270048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:56:24.777012  270048 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:56:24.777524  270048 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-759272" does not appear in /home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:56:24.777641  270048 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-243545/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-759272" cluster setting kubeconfig missing "test-preload-759272" context setting]
	I1115 09:56:24.777959  270048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/kubeconfig: {Name:mk85b3ca0ac5a906394239d54dc0b40d127f71ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:56:24.778484  270048 kapi.go:59] client config for test-preload-759272: &rest.Config{Host:"https://192.168.39.153:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.key", CAFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:56:24.778869  270048 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:56:24.778883  270048 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:56:24.778887  270048 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:56:24.778891  270048 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:56:24.778894  270048 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:56:24.779254  270048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:56:24.790075  270048 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.153
	I1115 09:56:24.790103  270048 kubeadm.go:1161] stopping kube-system containers ...
	I1115 09:56:24.790116  270048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 09:56:24.790154  270048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:56:24.827421  270048 cri.go:89] found id: ""
	I1115 09:56:24.827535  270048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1115 09:56:24.846555  270048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:56:24.858278  270048 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:56:24.858297  270048 kubeadm.go:158] found existing configuration files:
	
	I1115 09:56:24.858343  270048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:56:24.868787  270048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:56:24.868862  270048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:56:24.880074  270048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:56:24.890138  270048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:56:24.890198  270048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:56:24.901573  270048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:56:24.912487  270048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:56:24.912557  270048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:56:24.924035  270048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:56:24.934149  270048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:56:24.934205  270048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:56:24.944900  270048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:56:24.956268  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:25.016397  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:26.109986  270048 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093550071s)
	I1115 09:56:26.110050  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:26.374690  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:26.444494  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:26.520977  270048 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:56:26.521083  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:27.021520  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:27.521858  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:28.021412  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:28.521988  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:29.021311  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:29.051104  270048 api_server.go:72] duration metric: took 2.530141895s to wait for apiserver process to appear ...
	I1115 09:56:29.051142  270048 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:56:29.051166  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:31.050037  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 09:56:31.050071  270048 api_server.go:103] status: https://192.168.39.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 09:56:31.050091  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:31.090313  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 09:56:31.090345  270048 api_server.go:103] status: https://192.168.39.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 09:56:31.090362  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:31.165509  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 09:56:31.165553  270048 api_server.go:103] status: https://192.168.39.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 09:56:31.552175  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:31.569801  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 09:56:31.569826  270048 api_server.go:103] status: https://192.168.39.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 09:56:32.051492  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:32.057898  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 09:56:32.057927  270048 api_server.go:103] status: https://192.168.39.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 09:56:32.551319  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:32.557546  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 200:
	ok
	I1115 09:56:32.567740  270048 api_server.go:141] control plane version: v1.32.0
	I1115 09:56:32.567771  270048 api_server.go:131] duration metric: took 3.516622871s to wait for apiserver health ...
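The api_server.go wait above keeps re-requesting https://192.168.39.153:8443/healthz, treating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" until a plain 200/ok arrives. A minimal sketch of such a poll loop in Go follows (assumptions: this is not minikube's code, the URL and timeout are illustrative, and TLS verification is skipped only because the sketch wires in no CA bundle):

    // healthwait.go - sketch of polling an apiserver /healthz endpoint
    // until it returns 200, as the loop above does.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; no CA configured
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			// Any non-200 status (403 from anonymous auth, 500 while
    			// post-start hooks settle) is treated as "retry later".
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.153:8443/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver healthy")
    }
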
	I1115 09:56:32.567780  270048 cni.go:84] Creating CNI manager for ""
	I1115 09:56:32.567787  270048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:56:32.569734  270048 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1115 09:56:32.571003  270048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1115 09:56:32.584391  270048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1115 09:56:32.635709  270048 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:56:32.641191  270048 system_pods.go:59] 7 kube-system pods found
	I1115 09:56:32.641265  270048 system_pods.go:61] "coredns-668d6bf9bc-qn5cx" [737b6b5f-83eb-4d41-9c7b-5bac63e48416] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:56:32.641288  270048 system_pods.go:61] "etcd-test-preload-759272" [efd542c9-6678-4252-8db1-aca9917c1663] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:56:32.641306  270048 system_pods.go:61] "kube-apiserver-test-preload-759272" [41cdd570-2775-4bab-bd16-1260960eeb56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:56:32.641321  270048 system_pods.go:61] "kube-controller-manager-test-preload-759272" [5d57b92c-34c8-4066-94d5-bcb2e66ec325] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:56:32.641337  270048 system_pods.go:61] "kube-proxy-d9gcp" [060f71f3-2799-497d-8285-2937897233ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:56:32.641353  270048 system_pods.go:61] "kube-scheduler-test-preload-759272" [ad254c7c-e9a2-450e-9d1f-e4c36f787a90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:56:32.641369  270048 system_pods.go:61] "storage-provisioner" [53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:56:32.641386  270048 system_pods.go:74] duration metric: took 5.649914ms to wait for pod list to return data ...
	I1115 09:56:32.641401  270048 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:56:32.652739  270048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 09:56:32.652780  270048 node_conditions.go:123] node cpu capacity is 2
	I1115 09:56:32.652799  270048 node_conditions.go:105] duration metric: took 11.389535ms to run NodePressure ...
	I1115 09:56:32.652882  270048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:56:32.936564  270048 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 09:56:32.939874  270048 kubeadm.go:744] kubelet initialised
	I1115 09:56:32.939895  270048 kubeadm.go:745] duration metric: took 3.300556ms waiting for restarted kubelet to initialise ...
	I1115 09:56:32.939913  270048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:56:32.956889  270048 ops.go:34] apiserver oom_adj: -16
	I1115 09:56:32.956919  270048 kubeadm.go:602] duration metric: took 8.192612732s to restartPrimaryControlPlane
	I1115 09:56:32.956934  270048 kubeadm.go:403] duration metric: took 8.240616557s to StartCluster
	I1115 09:56:32.956958  270048 settings.go:142] acquiring lock: {Name:mk00f9aa5a46ce077bf17ee5efb58b1b4c2cdbac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:56:32.957052  270048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:56:32.957652  270048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/kubeconfig: {Name:mk85b3ca0ac5a906394239d54dc0b40d127f71ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:56:32.957952  270048 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:56:32.958150  270048 config.go:182] Loaded profile config "test-preload-759272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1115 09:56:32.958085  270048 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:56:32.958195  270048 addons.go:70] Setting storage-provisioner=true in profile "test-preload-759272"
	I1115 09:56:32.958503  270048 addons.go:239] Setting addon storage-provisioner=true in "test-preload-759272"
	W1115 09:56:32.958549  270048 addons.go:248] addon storage-provisioner should already be in state true
	I1115 09:56:32.958544  270048 addons.go:70] Setting default-storageclass=true in profile "test-preload-759272"
	I1115 09:56:32.958570  270048 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-759272"
	I1115 09:56:32.958583  270048 host.go:66] Checking if "test-preload-759272" exists ...
	I1115 09:56:32.960332  270048 out.go:179] * Verifying Kubernetes components...
	I1115 09:56:32.961676  270048 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:56:32.961748  270048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:56:32.961764  270048 kapi.go:59] client config for test-preload-759272: &rest.Config{Host:"https://192.168.39.153:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.key", CAFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:56:32.961991  270048 addons.go:239] Setting addon default-storageclass=true in "test-preload-759272"
	W1115 09:56:32.962003  270048 addons.go:248] addon default-storageclass should already be in state true
	I1115 09:56:32.962019  270048 host.go:66] Checking if "test-preload-759272" exists ...
	I1115 09:56:32.963297  270048 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:56:32.963317  270048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:56:32.963696  270048 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:56:32.963714  270048 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:56:32.966087  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:32.966288  270048 main.go:143] libmachine: domain test-preload-759272 has defined MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:32.966492  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:32.966517  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:32.966664  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:32.966683  270048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:72:d1", ip: ""} in network mk-test-preload-759272: {Iface:virbr1 ExpiryTime:2025-11-15 10:56:15 +0000 UTC Type:0 Mac:52:54:00:56:72:d1 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:test-preload-759272 Clientid:01:52:54:00:56:72:d1}
	I1115 09:56:32.966714  270048 main.go:143] libmachine: domain test-preload-759272 has defined IP address 192.168.39.153 and MAC address 52:54:00:56:72:d1 in network mk-test-preload-759272
	I1115 09:56:32.966900  270048 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/test-preload-759272/id_rsa Username:docker}
	I1115 09:56:33.162806  270048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:56:33.186681  270048 node_ready.go:35] waiting up to 6m0s for node "test-preload-759272" to be "Ready" ...
	I1115 09:56:33.333705  270048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:56:33.338523  270048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:56:33.976044  270048 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:56:33.977743  270048 addons.go:515] duration metric: took 1.019657328s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1115 09:56:35.190678  270048 node_ready.go:57] node "test-preload-759272" has "Ready":"False" status (will retry)
	W1115 09:56:37.690894  270048 node_ready.go:57] node "test-preload-759272" has "Ready":"False" status (will retry)
	W1115 09:56:40.190548  270048 node_ready.go:57] node "test-preload-759272" has "Ready":"False" status (will retry)
	I1115 09:56:41.690551  270048 node_ready.go:49] node "test-preload-759272" is "Ready"
	I1115 09:56:41.690627  270048 node_ready.go:38] duration metric: took 8.503844691s for node "test-preload-759272" to be "Ready" ...
	I1115 09:56:41.690647  270048 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:56:41.690699  270048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:56:41.711290  270048 api_server.go:72] duration metric: took 8.753290638s to wait for apiserver process to appear ...
	I1115 09:56:41.711323  270048 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:56:41.711342  270048 api_server.go:253] Checking apiserver healthz at https://192.168.39.153:8443/healthz ...
	I1115 09:56:41.716702  270048 api_server.go:279] https://192.168.39.153:8443/healthz returned 200:
	ok
	I1115 09:56:41.717776  270048 api_server.go:141] control plane version: v1.32.0
	I1115 09:56:41.717798  270048 api_server.go:131] duration metric: took 6.467558ms to wait for apiserver health ...
	I1115 09:56:41.717810  270048 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:56:41.721490  270048 system_pods.go:59] 7 kube-system pods found
	I1115 09:56:41.721517  270048 system_pods.go:61] "coredns-668d6bf9bc-qn5cx" [737b6b5f-83eb-4d41-9c7b-5bac63e48416] Running
	I1115 09:56:41.721529  270048 system_pods.go:61] "etcd-test-preload-759272" [efd542c9-6678-4252-8db1-aca9917c1663] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:56:41.721538  270048 system_pods.go:61] "kube-apiserver-test-preload-759272" [41cdd570-2775-4bab-bd16-1260960eeb56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:56:41.721546  270048 system_pods.go:61] "kube-controller-manager-test-preload-759272" [5d57b92c-34c8-4066-94d5-bcb2e66ec325] Running
	I1115 09:56:41.721551  270048 system_pods.go:61] "kube-proxy-d9gcp" [060f71f3-2799-497d-8285-2937897233ec] Running
	I1115 09:56:41.721556  270048 system_pods.go:61] "kube-scheduler-test-preload-759272" [ad254c7c-e9a2-450e-9d1f-e4c36f787a90] Running
	I1115 09:56:41.721565  270048 system_pods.go:61] "storage-provisioner" [53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4] Running
	I1115 09:56:41.721573  270048 system_pods.go:74] duration metric: took 3.755845ms to wait for pod list to return data ...
	I1115 09:56:41.721585  270048 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:56:41.723918  270048 default_sa.go:45] found service account: "default"
	I1115 09:56:41.723935  270048 default_sa.go:55] duration metric: took 2.342483ms for default service account to be created ...
	I1115 09:56:41.723945  270048 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:56:41.726250  270048 system_pods.go:86] 7 kube-system pods found
	I1115 09:56:41.726274  270048 system_pods.go:89] "coredns-668d6bf9bc-qn5cx" [737b6b5f-83eb-4d41-9c7b-5bac63e48416] Running
	I1115 09:56:41.726283  270048 system_pods.go:89] "etcd-test-preload-759272" [efd542c9-6678-4252-8db1-aca9917c1663] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:56:41.726289  270048 system_pods.go:89] "kube-apiserver-test-preload-759272" [41cdd570-2775-4bab-bd16-1260960eeb56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:56:41.726294  270048 system_pods.go:89] "kube-controller-manager-test-preload-759272" [5d57b92c-34c8-4066-94d5-bcb2e66ec325] Running
	I1115 09:56:41.726298  270048 system_pods.go:89] "kube-proxy-d9gcp" [060f71f3-2799-497d-8285-2937897233ec] Running
	I1115 09:56:41.726301  270048 system_pods.go:89] "kube-scheduler-test-preload-759272" [ad254c7c-e9a2-450e-9d1f-e4c36f787a90] Running
	I1115 09:56:41.726304  270048 system_pods.go:89] "storage-provisioner" [53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4] Running
	I1115 09:56:41.726309  270048 system_pods.go:126] duration metric: took 2.359802ms to wait for k8s-apps to be running ...
	I1115 09:56:41.726315  270048 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:56:41.726358  270048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:56:41.742699  270048 system_svc.go:56] duration metric: took 16.375182ms WaitForService to wait for kubelet
	I1115 09:56:41.742728  270048 kubeadm.go:587] duration metric: took 8.784736241s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:56:41.742746  270048 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:56:41.746172  270048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1115 09:56:41.746191  270048 node_conditions.go:123] node cpu capacity is 2
	I1115 09:56:41.746202  270048 node_conditions.go:105] duration metric: took 3.451017ms to run NodePressure ...
	I1115 09:56:41.746212  270048 start.go:242] waiting for startup goroutines ...
	I1115 09:56:41.746218  270048 start.go:247] waiting for cluster config update ...
	I1115 09:56:41.746228  270048 start.go:256] writing updated cluster config ...
	I1115 09:56:41.746513  270048 ssh_runner.go:195] Run: rm -f paused
	I1115 09:56:41.751616  270048 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:56:41.752092  270048 kapi.go:59] client config for test-preload-759272: &rest.Config{Host:"https://192.168.39.153:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/profiles/test-preload-759272/client.key", CAFile:"/home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:56:41.755168  270048 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-qn5cx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:41.760076  270048 pod_ready.go:94] pod "coredns-668d6bf9bc-qn5cx" is "Ready"
	I1115 09:56:41.760101  270048 pod_ready.go:86] duration metric: took 4.908514ms for pod "coredns-668d6bf9bc-qn5cx" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:41.762134  270048 pod_ready.go:83] waiting for pod "etcd-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:56:43.768570  270048 pod_ready.go:104] pod "etcd-test-preload-759272" is not "Ready", error: <nil>
	I1115 09:56:45.272114  270048 pod_ready.go:94] pod "etcd-test-preload-759272" is "Ready"
	I1115 09:56:45.272156  270048 pod_ready.go:86] duration metric: took 3.510002227s for pod "etcd-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.276003  270048 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.280090  270048 pod_ready.go:94] pod "kube-apiserver-test-preload-759272" is "Ready"
	I1115 09:56:45.280117  270048 pod_ready.go:86] duration metric: took 4.088715ms for pod "kube-apiserver-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.282321  270048 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.291877  270048 pod_ready.go:94] pod "kube-controller-manager-test-preload-759272" is "Ready"
	I1115 09:56:45.291899  270048 pod_ready.go:86] duration metric: took 9.553745ms for pod "kube-controller-manager-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.293836  270048 pod_ready.go:83] waiting for pod "kube-proxy-d9gcp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.555429  270048 pod_ready.go:94] pod "kube-proxy-d9gcp" is "Ready"
	I1115 09:56:45.555476  270048 pod_ready.go:86] duration metric: took 261.613886ms for pod "kube-proxy-d9gcp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:45.756237  270048 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:46.156026  270048 pod_ready.go:94] pod "kube-scheduler-test-preload-759272" is "Ready"
	I1115 09:56:46.156055  270048 pod_ready.go:86] duration metric: took 399.789877ms for pod "kube-scheduler-test-preload-759272" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:56:46.156070  270048 pod_ready.go:40] duration metric: took 4.404427616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
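Each pod_ready.go wait above polls the Kubernetes API for one control-plane pod until its Ready condition reports True (or the pod is gone). A minimal client-go sketch of that per-pod check follows (assumptions: this is not minikube's implementation; the kubeconfig path and pod name are illustrative):

    // podready.go - sketch of checking whether a named pod has the Ready
    // condition set to True, as the per-pod waits above do.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(
    		context.Background(), "etcd-test-preload-759272", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s ready: %v\n", pod.Name, ready)
    }
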
	I1115 09:56:46.199416  270048 start.go:628] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1115 09:56:46.201211  270048 out.go:203] 
	W1115 09:56:46.202380  270048 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1115 09:56:46.203474  270048 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1115 09:56:46.204728  270048 out.go:179] * Done! kubectl is now configured to use "test-preload-759272" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 09:56:46 test-preload-759272 crio[837]: time="2025-11-15 09:56:46.988864979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200606988791675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79b00967-4088-4418-b615-4ae357720eb7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:46 test-preload-759272 crio[837]: time="2025-11-15 09:56:46.989599279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f70c30c9-4e94-4316-b182-debd95d95766 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:46 test-preload-759272 crio[837]: time="2025-11-15 09:56:46.989657704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f70c30c9-4e94-4316-b182-debd95d95766 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:46 test-preload-759272 crio[837]: time="2025-11-15 09:56:46.989859506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c96a5fdcae4adbed7ca5255cfa0f029637517e9507621f8b6e9a6e825d76a2a,PodSandboxId:179c62e1f49ca9c3d1ec46e62c9d9ed41871af76d3c44813ebb3a3ca5b99a85d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763200599505656841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qn5cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737b6b5f-83eb-4d41-9c7b-5bac63e48416,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663f6fac3d5b06fd9ffc24400e4508d775e1022e4ea0df5c8a62acade5615982,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763200592572736714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c8d4315400a7039dad4162bbb3eb46d914ae30091d2942a63b76f9dc9de960,PodSandboxId:a20799fb8fc18b9f91072e1e902caea4894f87e32e1f40020045a41e56a3125b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763200591893966548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d9gcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06
0f71f3-2799-497d-8285-2937897233ec,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763200591873897874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-4
3dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee42f0d97510f350323985fee0c324219e3f179b4ee62ce9ac406006e215f0c,PodSandboxId:3f0204155601c4c1a4c41ab02e355ab536f66532480c38716b6d2aa284850f5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763200588685593314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c6092ecfed5abe73de25
8a48aeef28,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f8fe5566d67d5bbe2a753440a896fd15be9307e7c827a8b9d01467a5ffffef,PodSandboxId:49e051593ffa398037809273950f5a9a31a07275f815cdb8b558a33853ec9dbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763200588660167478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb12e
b12448eecbf792b55a376289c5,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b6b018582ab1271eb6667b6577bd0c668cd39e4ea30841fcd4f3ebb70f3cf9,PodSandboxId:9bd7ce580a89b6150eaa487bc0426364195bdbaa2b5b5ec27d4992294bbb6e51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763200588643447442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf251551d1c8b488f29dc56b6e5bb0e,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869393770f3e2e9d92beecac25e2d0b49709a0e96059ba4b577ab993b9b0a89,PodSandboxId:6878616111cdaf9f02f8146b1dafeaa5bd160b922a3a7337990b7d96690a65c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763200588606677864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b19f5108584900607d64303eeed69b69,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f70c30c9-4e94-4316-b182-debd95d95766 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.031128469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a29ccbf5-2ec0-4895-997e-c6deac6507fa name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.031274441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a29ccbf5-2ec0-4895-997e-c6deac6507fa name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.032586962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a322dfb9-abe3-46aa-b11a-26ecd083221b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.033365756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200607033327010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a322dfb9-abe3-46aa-b11a-26ecd083221b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.033932764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd0f38b9-3959-47d5-a29f-a5613dcf5321 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.033994392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd0f38b9-3959-47d5-a29f-a5613dcf5321 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.034170975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c96a5fdcae4adbed7ca5255cfa0f029637517e9507621f8b6e9a6e825d76a2a,PodSandboxId:179c62e1f49ca9c3d1ec46e62c9d9ed41871af76d3c44813ebb3a3ca5b99a85d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763200599505656841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qn5cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737b6b5f-83eb-4d41-9c7b-5bac63e48416,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663f6fac3d5b06fd9ffc24400e4508d775e1022e4ea0df5c8a62acade5615982,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763200592572736714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c8d4315400a7039dad4162bbb3eb46d914ae30091d2942a63b76f9dc9de960,PodSandboxId:a20799fb8fc18b9f91072e1e902caea4894f87e32e1f40020045a41e56a3125b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763200591893966548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d9gcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06
0f71f3-2799-497d-8285-2937897233ec,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763200591873897874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-4
3dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee42f0d97510f350323985fee0c324219e3f179b4ee62ce9ac406006e215f0c,PodSandboxId:3f0204155601c4c1a4c41ab02e355ab536f66532480c38716b6d2aa284850f5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763200588685593314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c6092ecfed5abe73de25
8a48aeef28,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f8fe5566d67d5bbe2a753440a896fd15be9307e7c827a8b9d01467a5ffffef,PodSandboxId:49e051593ffa398037809273950f5a9a31a07275f815cdb8b558a33853ec9dbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763200588660167478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb12e
b12448eecbf792b55a376289c5,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b6b018582ab1271eb6667b6577bd0c668cd39e4ea30841fcd4f3ebb70f3cf9,PodSandboxId:9bd7ce580a89b6150eaa487bc0426364195bdbaa2b5b5ec27d4992294bbb6e51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763200588643447442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf251551d1c8b488f29dc56b6e5bb0e,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869393770f3e2e9d92beecac25e2d0b49709a0e96059ba4b577ab993b9b0a89,PodSandboxId:6878616111cdaf9f02f8146b1dafeaa5bd160b922a3a7337990b7d96690a65c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763200588606677864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b19f5108584900607d64303eeed69b69,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd0f38b9-3959-47d5-a29f-a5613dcf5321 name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.073732010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4942e916-b8b8-4dd7-a527-fc29a31a4833 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.073812258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4942e916-b8b8-4dd7-a527-fc29a31a4833 name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.075021109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d336ed1d-b819-48ed-9e16-5caa6f7f5a9d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.075498275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200607075476850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d336ed1d-b819-48ed-9e16-5caa6f7f5a9d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.076272368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f083e93a-e61a-400c-96d7-07f54f10b98b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.076360570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f083e93a-e61a-400c-96d7-07f54f10b98b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.076540840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c96a5fdcae4adbed7ca5255cfa0f029637517e9507621f8b6e9a6e825d76a2a,PodSandboxId:179c62e1f49ca9c3d1ec46e62c9d9ed41871af76d3c44813ebb3a3ca5b99a85d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763200599505656841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qn5cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737b6b5f-83eb-4d41-9c7b-5bac63e48416,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663f6fac3d5b06fd9ffc24400e4508d775e1022e4ea0df5c8a62acade5615982,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763200592572736714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c8d4315400a7039dad4162bbb3eb46d914ae30091d2942a63b76f9dc9de960,PodSandboxId:a20799fb8fc18b9f91072e1e902caea4894f87e32e1f40020045a41e56a3125b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763200591893966548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d9gcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06
0f71f3-2799-497d-8285-2937897233ec,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763200591873897874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-4
3dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee42f0d97510f350323985fee0c324219e3f179b4ee62ce9ac406006e215f0c,PodSandboxId:3f0204155601c4c1a4c41ab02e355ab536f66532480c38716b6d2aa284850f5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763200588685593314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c6092ecfed5abe73de25
8a48aeef28,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f8fe5566d67d5bbe2a753440a896fd15be9307e7c827a8b9d01467a5ffffef,PodSandboxId:49e051593ffa398037809273950f5a9a31a07275f815cdb8b558a33853ec9dbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763200588660167478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb12e
b12448eecbf792b55a376289c5,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b6b018582ab1271eb6667b6577bd0c668cd39e4ea30841fcd4f3ebb70f3cf9,PodSandboxId:9bd7ce580a89b6150eaa487bc0426364195bdbaa2b5b5ec27d4992294bbb6e51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763200588643447442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf251551d1c8b488f29dc56b6e5bb0e,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869393770f3e2e9d92beecac25e2d0b49709a0e96059ba4b577ab993b9b0a89,PodSandboxId:6878616111cdaf9f02f8146b1dafeaa5bd160b922a3a7337990b7d96690a65c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763200588606677864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b19f5108584900607d64303eeed69b69,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f083e93a-e61a-400c-96d7-07f54f10b98b name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.111511322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ca792f5-21b7-406a-ac85-7d459919669a name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.111842427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ca792f5-21b7-406a-ac85-7d459919669a name=/runtime.v1.RuntimeService/Version
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.113662458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68cecb71-45ad-41ce-9f08-8b1ff8c39497 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.114064913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200607114045216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68cecb71-45ad-41ce-9f08-8b1ff8c39497 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.114753037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b7ac040-66d4-4ced-8317-b3f7b474288a name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.114861336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b7ac040-66d4-4ced-8317-b3f7b474288a name=/runtime.v1.RuntimeService/ListContainers
	Nov 15 09:56:47 test-preload-759272 crio[837]: time="2025-11-15 09:56:47.115069228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c96a5fdcae4adbed7ca5255cfa0f029637517e9507621f8b6e9a6e825d76a2a,PodSandboxId:179c62e1f49ca9c3d1ec46e62c9d9ed41871af76d3c44813ebb3a3ca5b99a85d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763200599505656841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qn5cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 737b6b5f-83eb-4d41-9c7b-5bac63e48416,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:663f6fac3d5b06fd9ffc24400e4508d775e1022e4ea0df5c8a62acade5615982,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763200592572736714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c8d4315400a7039dad4162bbb3eb46d914ae30091d2942a63b76f9dc9de960,PodSandboxId:a20799fb8fc18b9f91072e1e902caea4894f87e32e1f40020045a41e56a3125b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763200591893966548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d9gcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06
0f71f3-2799-497d-8285-2937897233ec,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f,PodSandboxId:16648018fde8c1132e10b1206b65c124a5b1da09847724bce348e123f02065aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763200591873897874,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53bc2d7f-b58c-4
3dd-a19d-b2c5d2bdfae4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee42f0d97510f350323985fee0c324219e3f179b4ee62ce9ac406006e215f0c,PodSandboxId:3f0204155601c4c1a4c41ab02e355ab536f66532480c38716b6d2aa284850f5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763200588685593314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c6092ecfed5abe73de25
8a48aeef28,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f8fe5566d67d5bbe2a753440a896fd15be9307e7c827a8b9d01467a5ffffef,PodSandboxId:49e051593ffa398037809273950f5a9a31a07275f815cdb8b558a33853ec9dbf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763200588660167478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb12e
b12448eecbf792b55a376289c5,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b6b018582ab1271eb6667b6577bd0c668cd39e4ea30841fcd4f3ebb70f3cf9,PodSandboxId:9bd7ce580a89b6150eaa487bc0426364195bdbaa2b5b5ec27d4992294bbb6e51,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763200588643447442,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf251551d1c8b488f29dc56b6e5bb0e,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869393770f3e2e9d92beecac25e2d0b49709a0e96059ba4b577ab993b9b0a89,PodSandboxId:6878616111cdaf9f02f8146b1dafeaa5bd160b922a3a7337990b7d96690a65c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763200588606677864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-759272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b19f5108584900607d64303eeed69b69,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b7ac040-66d4-4ced-8317-b3f7b474288a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9c96a5fdcae4a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   179c62e1f49ca       coredns-668d6bf9bc-qn5cx
	663f6fac3d5b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   16648018fde8c       storage-provisioner
	12c8d4315400a       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   a20799fb8fc18       kube-proxy-d9gcp
	b563b5616ce9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   16648018fde8c       storage-provisioner
	8ee42f0d97510       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   3f0204155601c       kube-scheduler-test-preload-759272
	79f8fe5566d67       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   49e051593ffa3       kube-controller-manager-test-preload-759272
	12b6b018582ab       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   9bd7ce580a89b       etcd-test-preload-759272
	c869393770f3e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   6878616111cda       kube-apiserver-test-preload-759272
	
	
	==> coredns [9c96a5fdcae4adbed7ca5255cfa0f029637517e9507621f8b6e9a6e825d76a2a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41625 - 18253 "HINFO IN 5483347094889468050.2829537321545509131. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021669843s
	
	
	==> describe nodes <==
	Name:               test-preload-759272
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-759272
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=test-preload-759272
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_55_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:55:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-759272
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:56:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:56:41 +0000   Sat, 15 Nov 2025 09:55:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:56:41 +0000   Sat, 15 Nov 2025 09:55:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:56:41 +0000   Sat, 15 Nov 2025 09:55:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:56:41 +0000   Sat, 15 Nov 2025 09:56:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.153
	  Hostname:    test-preload-759272
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3af07a219ebd4583be2b10ef27d9a4ae
	  System UUID:                3af07a21-9ebd-4583-be2b-10ef27d9a4ae
	  Boot ID:                    7133ad43-2d97-435d-b50e-01e6bf2ce6ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qn5cx                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     78s
	  kube-system                 etcd-test-preload-759272                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         83s
	  kube-system                 kube-apiserver-test-preload-759272             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-test-preload-759272    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-d9gcp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-759272             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node test-preload-759272 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node test-preload-759272 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node test-preload-759272 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     83s                kubelet          Node test-preload-759272 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node test-preload-759272 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node test-preload-759272 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                83s                kubelet          Node test-preload-759272 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           79s                node-controller  Node test-preload-759272 event: Registered Node test-preload-759272 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-759272 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-759272 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-759272 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-759272 has been rebooted, boot id: 7133ad43-2d97-435d-b50e-01e6bf2ce6ab
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-759272 event: Registered Node test-preload-759272 in Controller
	
	
	==> dmesg <==
	[Nov15 09:56] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001315] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005565] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.008124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111011] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.096468] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.496522] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000089] kauditd_printk_skb: 143 callbacks suppressed
	[  +0.021977] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [12b6b018582ab1271eb6667b6577bd0c668cd39e4ea30841fcd4f3ebb70f3cf9] <==
	{"level":"info","ts":"2025-11-15T09:56:29.096803Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T09:56:29.099545Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7153d03280366cb8","initial-advertise-peer-urls":["https://192.168.39.153:2380"],"listen-peer-urls":["https://192.168.39.153:2380"],"advertise-client-urls":["https://192.168.39.153:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.153:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T09:56:29.101287Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"7153d03280366cb8","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-15T09:56:29.101283Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T09:56:29.101455Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T09:56:29.101509Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T09:56:29.101513Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.153:2380"}
	{"level":"info","ts":"2025-11-15T09:56:29.101525Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.153:2380"}
	{"level":"info","ts":"2025-11-15T09:56:29.101516Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T09:56:29.261306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T09:56:29.261364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T09:56:29.261394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 received MsgPreVoteResp from 7153d03280366cb8 at term 2"}
	{"level":"info","ts":"2025-11-15T09:56:29.261412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T09:56:29.261425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 received MsgVoteResp from 7153d03280366cb8 at term 3"}
	{"level":"info","ts":"2025-11-15T09:56:29.261434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7153d03280366cb8 became leader at term 3"}
	{"level":"info","ts":"2025-11-15T09:56:29.261448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7153d03280366cb8 elected leader 7153d03280366cb8 at term 3"}
	{"level":"info","ts":"2025-11-15T09:56:29.264531Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7153d03280366cb8","local-member-attributes":"{Name:test-preload-759272 ClientURLs:[https://192.168.39.153:2379]}","request-path":"/0/members/7153d03280366cb8/attributes","cluster-id":"5a9667b9ae591d0","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T09:56:29.264611Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:56:29.264623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:56:29.265097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T09:56:29.266179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T09:56:29.265575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T09:56:29.266607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T09:56:29.266821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.153:2379"}
	{"level":"info","ts":"2025-11-15T09:56:29.267122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:56:47 up 0 min,  0 users,  load average: 0.64, 0.17, 0.06
	Linux test-preload-759272 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c869393770f3e2e9d92beecac25e2d0b49709a0e96059ba4b577ab993b9b0a89] <==
	I1115 09:56:31.097746       1 policy_source.go:240] refreshing policies
	I1115 09:56:31.099406       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 09:56:31.099606       1 shared_informer.go:320] Caches are synced for configmaps
	I1115 09:56:31.103752       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1115 09:56:31.105556       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 09:56:31.110001       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 09:56:31.110031       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 09:56:31.110122       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 09:56:31.110356       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:56:31.117434       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:56:31.137357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1115 09:56:31.137453       1 aggregator.go:171] initial CRD sync complete...
	I1115 09:56:31.137461       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 09:56:31.137466       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:56:31.137470       1 cache.go:39] Caches are synced for autoregister controller
	E1115 09:56:31.152705       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 09:56:31.498488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1115 09:56:32.009531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:56:32.743715       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1115 09:56:32.799769       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1115 09:56:32.830747       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:56:32.836674       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:56:34.603733       1 controller.go:615] quota admission added evaluator for: endpoints
	I1115 09:56:34.706508       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:56:34.757804       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [79f8fe5566d67d5bbe2a753440a896fd15be9307e7c827a8b9d01467a5ffffef] <==
	I1115 09:56:34.304086       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1115 09:56:34.305067       1 shared_informer.go:320] Caches are synced for resource quota
	I1115 09:56:34.308399       1 shared_informer.go:320] Caches are synced for namespace
	I1115 09:56:34.325730       1 shared_informer.go:320] Caches are synced for garbage collector
	I1115 09:56:34.325784       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 09:56:34.325791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 09:56:34.331094       1 shared_informer.go:320] Caches are synced for cronjob
	I1115 09:56:34.335112       1 shared_informer.go:320] Caches are synced for garbage collector
	I1115 09:56:34.342333       1 shared_informer.go:320] Caches are synced for taint
	I1115 09:56:34.342423       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:56:34.342500       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-759272"
	I1115 09:56:34.342586       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 09:56:34.346247       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1115 09:56:34.350514       1 shared_informer.go:320] Caches are synced for attach detach
	I1115 09:56:34.352707       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1115 09:56:34.360711       1 shared_informer.go:320] Caches are synced for TTL
	I1115 09:56:34.360830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-759272"
	I1115 09:56:34.764656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="463.513525ms"
	I1115 09:56:34.765406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="121.546µs"
	I1115 09:56:39.615729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.696µs"
	I1115 09:56:40.635039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.173381ms"
	I1115 09:56:40.636680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="146.155µs"
	I1115 09:56:41.541513       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-759272"
	I1115 09:56:41.561630       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-759272"
	I1115 09:56:44.344666       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [12c8d4315400a7039dad4162bbb3eb46d914ae30091d2942a63b76f9dc9de960] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1115 09:56:32.163372       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1115 09:56:32.177879       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.153"]
	E1115 09:56:32.178162       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:56:32.266639       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1115 09:56:32.266726       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1115 09:56:32.266761       1 server_linux.go:170] "Using iptables Proxier"
	I1115 09:56:32.270033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:56:32.270793       1 server.go:497] "Version info" version="v1.32.0"
	I1115 09:56:32.270989       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:56:32.274330       1 config.go:199] "Starting service config controller"
	I1115 09:56:32.274518       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1115 09:56:32.274569       1 config.go:105] "Starting endpoint slice config controller"
	I1115 09:56:32.274587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1115 09:56:32.275174       1 config.go:329] "Starting node config controller"
	I1115 09:56:32.275267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1115 09:56:32.375329       1 shared_informer.go:320] Caches are synced for node config
	I1115 09:56:32.375409       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1115 09:56:32.375476       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8ee42f0d97510f350323985fee0c324219e3f179b4ee62ce9ac406006e215f0c] <==
	I1115 09:56:29.345400       1 serving.go:386] Generated self-signed cert in-memory
	W1115 09:56:31.060639       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 09:56:31.062274       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 09:56:31.062372       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 09:56:31.062401       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 09:56:31.123810       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1115 09:56:31.123849       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:56:31.126197       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:56:31.126259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1115 09:56:31.126282       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:56:31.126843       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 09:56:31.228091       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.205067    1159 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-759272\" already exists" pod="kube-system/kube-apiserver-test-preload-759272"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.205368    1159 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-759272"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.213482    1159 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-759272\" already exists" pod="kube-system/kube-controller-manager-test-preload-759272"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.431032    1159 apiserver.go:52] "Watching apiserver"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.435605    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qn5cx" podUID="737b6b5f-83eb-4d41-9c7b-5bac63e48416"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.448036    1159 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.494083    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060f71f3-2799-497d-8285-2937897233ec-xtables-lock\") pod \"kube-proxy-d9gcp\" (UID: \"060f71f3-2799-497d-8285-2937897233ec\") " pod="kube-system/kube-proxy-d9gcp"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.494187    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060f71f3-2799-497d-8285-2937897233ec-lib-modules\") pod \"kube-proxy-d9gcp\" (UID: \"060f71f3-2799-497d-8285-2937897233ec\") " pod="kube-system/kube-proxy-d9gcp"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: I1115 09:56:31.494264    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4-tmp\") pod \"storage-provisioner\" (UID: \"53bc2d7f-b58c-43dd-a19d-b2c5d2bdfae4\") " pod="kube-system/storage-provisioner"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.494527    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.494797    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume podName:737b6b5f-83eb-4d41-9c7b-5bac63e48416 nodeName:}" failed. No retries permitted until 2025-11-15 09:56:31.994719639 +0000 UTC m=+5.664409351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume") pod "coredns-668d6bf9bc-qn5cx" (UID: "737b6b5f-83eb-4d41-9c7b-5bac63e48416") : object "kube-system"/"coredns" not registered
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.516591    1159 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.998539    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 09:56:31 test-preload-759272 kubelet[1159]: E1115 09:56:31.998615    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume podName:737b6b5f-83eb-4d41-9c7b-5bac63e48416 nodeName:}" failed. No retries permitted until 2025-11-15 09:56:32.998601751 +0000 UTC m=+6.668291465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume") pod "coredns-668d6bf9bc-qn5cx" (UID: "737b6b5f-83eb-4d41-9c7b-5bac63e48416") : object "kube-system"/"coredns" not registered
	Nov 15 09:56:32 test-preload-759272 kubelet[1159]: I1115 09:56:32.551935    1159 scope.go:117] "RemoveContainer" containerID="b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f"
	Nov 15 09:56:33 test-preload-759272 kubelet[1159]: E1115 09:56:33.005818    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 09:56:33 test-preload-759272 kubelet[1159]: E1115 09:56:33.005895    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume podName:737b6b5f-83eb-4d41-9c7b-5bac63e48416 nodeName:}" failed. No retries permitted until 2025-11-15 09:56:35.005880522 +0000 UTC m=+8.675570233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume") pod "coredns-668d6bf9bc-qn5cx" (UID: "737b6b5f-83eb-4d41-9c7b-5bac63e48416") : object "kube-system"/"coredns" not registered
	Nov 15 09:56:33 test-preload-759272 kubelet[1159]: E1115 09:56:33.479495    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qn5cx" podUID="737b6b5f-83eb-4d41-9c7b-5bac63e48416"
	Nov 15 09:56:35 test-preload-759272 kubelet[1159]: E1115 09:56:35.019980    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 15 09:56:35 test-preload-759272 kubelet[1159]: E1115 09:56:35.020073    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume podName:737b6b5f-83eb-4d41-9c7b-5bac63e48416 nodeName:}" failed. No retries permitted until 2025-11-15 09:56:39.020058585 +0000 UTC m=+12.689748309 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/737b6b5f-83eb-4d41-9c7b-5bac63e48416-config-volume") pod "coredns-668d6bf9bc-qn5cx" (UID: "737b6b5f-83eb-4d41-9c7b-5bac63e48416") : object "kube-system"/"coredns" not registered
	Nov 15 09:56:35 test-preload-759272 kubelet[1159]: E1115 09:56:35.480008    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qn5cx" podUID="737b6b5f-83eb-4d41-9c7b-5bac63e48416"
	Nov 15 09:56:36 test-preload-759272 kubelet[1159]: E1115 09:56:36.520302    1159 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200596516651356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 09:56:36 test-preload-759272 kubelet[1159]: E1115 09:56:36.520329    1159 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200596516651356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 09:56:46 test-preload-759272 kubelet[1159]: E1115 09:56:46.522372    1159 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200606522019873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 15 09:56:46 test-preload-759272 kubelet[1159]: E1115 09:56:46.522398    1159 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763200606522019873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [663f6fac3d5b06fd9ffc24400e4508d775e1022e4ea0df5c8a62acade5615982] <==
	I1115 09:56:32.685482       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 09:56:32.697934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 09:56:32.698069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b563b5616ce9c927fe7d7a85e6b4a2521cf1edabe6b457c6a8680d89f94b4f7f] <==
	I1115 09:56:31.964336       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 09:56:31.967331       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-759272 -n test-preload-759272
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-759272 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-759272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-759272
--- FAIL: TestPreload (133.15s)

                                                
                                    

Test pass (309/351)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.25
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.34
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 1.22
22 TestOffline 93.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 128.31
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 11.53
35 TestAddons/parallel/Registry 16.53
36 TestAddons/parallel/RegistryCreds 0.95
38 TestAddons/parallel/InspektorGadget 12.14
39 TestAddons/parallel/MetricsServer 6.32
41 TestAddons/parallel/CSI 58.98
42 TestAddons/parallel/Headlamp 20.61
43 TestAddons/parallel/CloudSpanner 5.73
44 TestAddons/parallel/LocalPath 14.63
45 TestAddons/parallel/NvidiaDevicePlugin 6.92
46 TestAddons/parallel/Yakd 11.94
48 TestAddons/StoppedEnableDisable 77.1
49 TestCertOptions 79.99
50 TestCertExpiration 298.28
52 TestForceSystemdFlag 40.84
53 TestForceSystemdEnv 58.46
58 TestErrorSpam/setup 36.63
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.69
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.76
63 TestErrorSpam/stop 5.85
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.45
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.22
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.13
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.83
75 TestFunctional/serial/CacheCmd/cache/add_local 2.61
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 34.82
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.4
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.31
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 13.55
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.69
97 TestFunctional/parallel/ServiceCmdConnect 17.36
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 45.26
101 TestFunctional/parallel/SSHCmd 0.33
102 TestFunctional/parallel/CpCmd 1.11
103 TestFunctional/parallel/MySQL 24.66
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.11
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
113 TestFunctional/parallel/License 0.89
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.04
122 TestFunctional/parallel/ImageCommands/Setup 1.74
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.68
135 TestFunctional/parallel/MountCmd/any-port 18.26
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.14
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.87
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
142 TestFunctional/parallel/MountCmd/specific-port 1.57
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
144 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
146 TestFunctional/parallel/ProfileCmd/profile_list 0.31
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
148 TestFunctional/parallel/ServiceCmd/List 1.27
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.39
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
151 TestFunctional/parallel/ServiceCmd/Format 0.32
152 TestFunctional/parallel/ServiceCmd/URL 0.28
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 205.04
161 TestMultiControlPlane/serial/DeployApp 7.42
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 44.81
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 10.68
167 TestMultiControlPlane/serial/StopSecondaryNode 82.8
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 34.54
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 364.56
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.59
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 260.16
175 TestMultiControlPlane/serial/RestartCluster 90.69
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
177 TestMultiControlPlane/serial/AddSecondaryNode 79.18
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
183 TestJSONOutput/start/Command 54.47
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.7
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.64
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.88
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 76.41
215 TestMountStart/serial/StartWithMountFirst 20.39
216 TestMountStart/serial/VerifyMountFirst 0.31
217 TestMountStart/serial/StartWithMountSecond 19.26
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 18.72
223 TestMountStart/serial/VerifyMountPostStop 0.32
226 TestMultiNode/serial/FreshStart2Nodes 99.68
227 TestMultiNode/serial/DeployApp2Nodes 7.17
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 40.49
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 6
233 TestMultiNode/serial/StopNode 2.18
234 TestMultiNode/serial/StartAfterStop 41.3
235 TestMultiNode/serial/RestartKeepsNodes 306.55
236 TestMultiNode/serial/DeleteNode 2.65
237 TestMultiNode/serial/StopMultiNode 172.37
238 TestMultiNode/serial/RestartMultiNode 83.17
239 TestMultiNode/serial/ValidateNameConflict 38.83
246 TestScheduledStopUnix 108.48
250 TestRunningBinaryUpgrade 125.36
252 TestKubernetesUpgrade 144.01
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 95.35
264 TestNetworkPlugins/group/false 4.55
268 TestISOImage/Setup 43.27
269 TestNoKubernetes/serial/StartWithStopK8s 49.54
271 TestISOImage/Binaries/crictl 0.2
272 TestISOImage/Binaries/curl 0.21
273 TestISOImage/Binaries/docker 0.21
274 TestISOImage/Binaries/git 0.2
275 TestISOImage/Binaries/iptables 0.22
276 TestISOImage/Binaries/podman 0.19
277 TestISOImage/Binaries/rsync 0.21
278 TestISOImage/Binaries/socat 0.2
279 TestISOImage/Binaries/wget 0.2
280 TestISOImage/Binaries/VBoxControl 0.19
281 TestISOImage/Binaries/VBoxService 0.3
282 TestNoKubernetes/serial/Start 40.75
283 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
285 TestNoKubernetes/serial/ProfileList 1.15
286 TestNoKubernetes/serial/Stop 1.27
287 TestNoKubernetes/serial/StartNoArgs 53.12
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
289 TestStoppedBinaryUpgrade/Setup 2.97
290 TestStoppedBinaryUpgrade/Upgrade 106.52
299 TestPause/serial/Start 60.71
300 TestNetworkPlugins/group/auto/Start 71.29
301 TestPause/serial/SecondStartNoReconfiguration 52.07
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
303 TestNetworkPlugins/group/kindnet/Start 61.65
304 TestNetworkPlugins/group/auto/KubeletFlags 0.21
305 TestNetworkPlugins/group/auto/NetCatPod 11.25
306 TestNetworkPlugins/group/calico/Start 104.96
307 TestNetworkPlugins/group/auto/DNS 0.18
308 TestNetworkPlugins/group/auto/Localhost 0.13
309 TestNetworkPlugins/group/auto/HairPin 0.13
310 TestPause/serial/Pause 0.93
311 TestPause/serial/VerifyStatus 0.28
312 TestPause/serial/Unpause 0.97
313 TestPause/serial/PauseAgain 1.19
314 TestPause/serial/DeletePaused 1.01
315 TestPause/serial/VerifyDeletedResources 0.74
316 TestNetworkPlugins/group/custom-flannel/Start 89.11
317 TestNetworkPlugins/group/enable-default-cni/Start 87.68
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
320 TestNetworkPlugins/group/kindnet/NetCatPod 13.73
321 TestNetworkPlugins/group/kindnet/DNS 0.19
322 TestNetworkPlugins/group/kindnet/Localhost 0.15
323 TestNetworkPlugins/group/kindnet/HairPin 0.16
324 TestNetworkPlugins/group/flannel/Start 70.24
325 TestNetworkPlugins/group/calico/ControllerPod 6.01
326 TestNetworkPlugins/group/calico/KubeletFlags 0.18
327 TestNetworkPlugins/group/calico/NetCatPod 11.25
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.49
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
332 TestNetworkPlugins/group/calico/DNS 0.17
333 TestNetworkPlugins/group/calico/Localhost 0.15
334 TestNetworkPlugins/group/calico/HairPin 0.13
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.36
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
338 TestNetworkPlugins/group/custom-flannel/DNS 0.16
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
340 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
341 TestNetworkPlugins/group/bridge/Start 59.08
343 TestStartStop/group/old-k8s-version/serial/FirstStart 78.36
345 TestStartStop/group/no-preload/serial/FirstStart 118.2
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
348 TestNetworkPlugins/group/flannel/NetCatPod 11.3
349 TestNetworkPlugins/group/flannel/DNS 0.17
350 TestNetworkPlugins/group/flannel/Localhost 0.14
351 TestNetworkPlugins/group/flannel/HairPin 0.14
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
353 TestNetworkPlugins/group/bridge/NetCatPod 11.28
355 TestStartStop/group/embed-certs/serial/FirstStart 59.01
356 TestNetworkPlugins/group/bridge/DNS 0.18
357 TestNetworkPlugins/group/bridge/Localhost 0.14
358 TestNetworkPlugins/group/bridge/HairPin 0.15
359 TestStartStop/group/old-k8s-version/serial/DeployApp 11.34
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.85
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
363 TestStartStop/group/old-k8s-version/serial/Stop 86.71
364 TestStartStop/group/embed-certs/serial/DeployApp 10.28
365 TestStartStop/group/no-preload/serial/DeployApp 10.28
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
367 TestStartStop/group/embed-certs/serial/Stop 82.74
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
369 TestStartStop/group/no-preload/serial/Stop 83.62
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.64
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
374 TestStartStop/group/old-k8s-version/serial/SecondStart 44.68
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
376 TestStartStop/group/embed-certs/serial/SecondStart 44.79
377 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
378 TestStartStop/group/no-preload/serial/SecondStart 76.2
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
382 TestStartStop/group/old-k8s-version/serial/Pause 3.26
383 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
384 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.06
386 TestStartStop/group/newest-cni/serial/FirstStart 67.99
387 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
390 TestStartStop/group/embed-certs/serial/Pause 3.83
392 TestISOImage/PersistentMounts//data 0.23
393 TestISOImage/PersistentMounts//var/lib/docker 0.22
394 TestISOImage/PersistentMounts//var/lib/cni 0.21
395 TestISOImage/PersistentMounts//var/lib/kubelet 0.23
396 TestISOImage/PersistentMounts//var/lib/minikube 0.23
397 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
398 TestISOImage/PersistentMounts//var/lib/boot2docker 0.22
399 TestISOImage/VersionJSON 0.2
400 TestISOImage/eBPFSupport 0.21
401 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
403 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
405 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
406 TestStartStop/group/no-preload/serial/Pause 2.99
407 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
408 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.26
409 TestStartStop/group/newest-cni/serial/DeployApp 0
410 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
411 TestStartStop/group/newest-cni/serial/Stop 12.73
412 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
413 TestStartStop/group/newest-cni/serial/SecondStart 33.77
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
417 TestStartStop/group/newest-cni/serial/Pause 3.27
TestDownloadOnly/v1.28.0/json-events (23.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-855369 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-855369 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.251939197s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.25s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:06:27.760284  247445 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 09:06:27.760377  247445 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-855369
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-855369: exit status 85 (72.531378ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-855369 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-855369 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:06:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:06:04.560811  247457 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:06:04.560926  247457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:04.560935  247457 out.go:374] Setting ErrFile to fd 2...
	I1115 09:06:04.560940  247457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:04.561130  247457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	W1115 09:06:04.561235  247457 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21895-243545/.minikube/config/config.json: open /home/jenkins/minikube-integration/21895-243545/.minikube/config/config.json: no such file or directory
	I1115 09:06:04.561694  247457 out.go:368] Setting JSON to true
	I1115 09:06:04.562614  247457 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6507,"bootTime":1763191058,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:06:04.562714  247457 start.go:143] virtualization: kvm guest
	I1115 09:06:04.565027  247457 out.go:99] [download-only-855369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:06:04.565167  247457 notify.go:221] Checking for updates...
	W1115 09:06:04.565189  247457 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:06:04.566291  247457 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:06:04.567553  247457 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:06:04.568658  247457 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:06:04.569866  247457 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:06:04.570937  247457 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:06:04.573378  247457 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:06:04.573654  247457 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:06:04.607704  247457 out.go:99] Using the kvm2 driver based on user configuration
	I1115 09:06:04.607757  247457 start.go:309] selected driver: kvm2
	I1115 09:06:04.607770  247457 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:06:04.608182  247457 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:06:04.608785  247457 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1115 09:06:04.608975  247457 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:06:04.609011  247457 cni.go:84] Creating CNI manager for ""
	I1115 09:06:04.609070  247457 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:06:04.609080  247457 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1115 09:06:04.609134  247457 start.go:353] cluster config:
	{Name:download-only-855369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-855369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:06:04.609372  247457 iso.go:125] acquiring lock: {Name:mkff40ddaa37657d9e8283719561f1fce12069ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:06:04.611042  247457 out.go:99] Downloading VM boot image ...
	I1115 09:06:04.611079  247457 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21895-243545/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1115 09:06:15.145315  247457 out.go:99] Starting "download-only-855369" primary control-plane node in "download-only-855369" cluster
	I1115 09:06:15.145367  247457 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:06:15.241904  247457 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:06:15.241963  247457 cache.go:65] Caching tarball of preloaded images
	I1115 09:06:15.242181  247457 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:06:15.244193  247457 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 09:06:15.244216  247457 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:06:15.339908  247457 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1115 09:06:15.340044  247457 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-855369 host does not exist
	  To start a cluster, run: "minikube start -p download-only-855369"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-855369
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-071043 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-071043 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.341936545s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.34s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:06:39.476414  247445 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:06:39.476470  247445 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-071043
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-071043: exit status 85 (71.848084ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-855369 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-855369 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
	│ delete  │ -p download-only-855369                                                                                                                                                 │ download-only-855369 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
	│ start   │ -o=json --download-only -p download-only-071043 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-071043 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:06:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:06:28.186245  247684 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:06:28.186341  247684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:28.186346  247684 out.go:374] Setting ErrFile to fd 2...
	I1115 09:06:28.186350  247684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:06:28.186578  247684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:06:28.187006  247684 out.go:368] Setting JSON to true
	I1115 09:06:28.187878  247684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6530,"bootTime":1763191058,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:06:28.187980  247684 start.go:143] virtualization: kvm guest
	I1115 09:06:28.189856  247684 out.go:99] [download-only-071043] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:06:28.190004  247684 notify.go:221] Checking for updates...
	I1115 09:06:28.191054  247684 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:06:28.192117  247684 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:06:28.193226  247684 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:06:28.194279  247684 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:06:28.195396  247684 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:06:28.197355  247684 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:06:28.197640  247684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:06:28.227655  247684 out.go:99] Using the kvm2 driver based on user configuration
	I1115 09:06:28.227690  247684 start.go:309] selected driver: kvm2
	I1115 09:06:28.227696  247684 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:06:28.228017  247684 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:06:28.228531  247684 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1115 09:06:28.228725  247684 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:06:28.228762  247684 cni.go:84] Creating CNI manager for ""
	I1115 09:06:28.228816  247684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1115 09:06:28.228827  247684 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1115 09:06:28.228875  247684 start.go:353] cluster config:
	{Name:download-only-071043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-071043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:06:28.229016  247684 iso.go:125] acquiring lock: {Name:mkff40ddaa37657d9e8283719561f1fce12069ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:06:28.230125  247684 out.go:99] Starting "download-only-071043" primary control-plane node in "download-only-071043" cluster
	I1115 09:06:28.230142  247684 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:06:28.774590  247684 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:06:28.774655  247684 cache.go:65] Caching tarball of preloaded images
	I1115 09:06:28.774874  247684 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:06:28.776679  247684 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1115 09:06:28.776714  247684 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:06:28.874678  247684 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1115 09:06:28.874731  247684 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-071043 host does not exist
	  To start a cluster, run: "minikube start -p download-only-071043"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-071043
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (1.22s)

                                                
                                                
=== RUN   TestBinaryMirror
I1115 09:06:40.204718  247445 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-042783 --alsologtostderr --binary-mirror http://127.0.0.1:43911 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-042783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-042783
--- PASS: TestBinaryMirror (1.22s)

                                                
                                    
TestOffline (93.34s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-379010 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-379010 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.490049222s)
helpers_test.go:175: Cleaning up "offline-crio-379010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-379010
--- PASS: TestOffline (93.34s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-663794
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-663794: exit status 85 (69.127834ms)

                                                
                                                
-- stdout --
	* Profile "addons-663794" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663794"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-663794
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-663794: exit status 85 (68.449091ms)

                                                
                                                
-- stdout --
	* Profile "addons-663794" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663794"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (128.31s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-663794 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-663794 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.308092069s)
--- PASS: TestAddons/Setup (128.31s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-663794 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-663794 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-663794 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-663794 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [011de042-57a4-4b3e-bb73-a8fb6b5af30b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [011de042-57a4-4b3e-bb73-a8fb6b5af30b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.0039902s
addons_test.go:694: (dbg) Run:  kubectl --context addons-663794 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-663794 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-663794 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                    
TestAddons/parallel/Registry (16.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.750433ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hgvh6" [76662db3-ff4c-4ca1-8587-5d8f12c77a66] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007762305s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9tkz8" [527b58a0-a1f0-4419-ac42-b4de22cf8ccb] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004616305s
addons_test.go:392: (dbg) Run:  kubectl --context addons-663794 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-663794 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-663794 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.762046934s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 ip
2025/11/15 09:09:25 [DEBUG] GET http://192.168.39.78:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.53s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.95s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.979254ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-663794
addons_test.go:332: (dbg) Run:  kubectl --context addons-663794 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.95s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-4jp9g" [e232a6b7-fd53-48f3-bcff-8546e6e6de95] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.011335043s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable inspektor-gadget --alsologtostderr -v=1: (6.1261834s)
--- PASS: TestAddons/parallel/InspektorGadget (12.14s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.46892ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-z4cnh" [a5aaf6d1-1d0d-439f-a5f1-50cd9a24a185] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009126011s
addons_test.go:463: (dbg) Run:  kubectl --context addons-663794 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable metrics-server --alsologtostderr -v=1: (1.230286864s)
--- PASS: TestAddons/parallel/MetricsServer (6.32s)

                                                
                                    
TestAddons/parallel/CSI (58.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1115 09:09:16.999359  247445 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:09:17.018217  247445 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:09:17.018244  247445 kapi.go:107] duration metric: took 18.909102ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 18.917831ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-663794 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-663794 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ec0026d5-801f-4061-b90e-3e604b09e948] Pending
helpers_test.go:352: "task-pv-pod" [ec0026d5-801f-4061-b90e-3e604b09e948] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ec0026d5-801f-4061-b90e-3e604b09e948] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004031287s
addons_test.go:572: (dbg) Run:  kubectl --context addons-663794 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-663794 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-663794 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-663794 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-663794 delete pod task-pv-pod: (1.208185948s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-663794 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-663794 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-663794 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [458dc5c7-ce53-4e18-80ba-32fb7c702082] Pending
helpers_test.go:352: "task-pv-pod-restore" [458dc5c7-ce53-4e18-80ba-32fb7c702082] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [458dc5c7-ce53-4e18-80ba-32fb7c702082] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004404324s
addons_test.go:614: (dbg) Run:  kubectl --context addons-663794 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-663794 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-663794 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable volumesnapshots --alsologtostderr -v=1: (1.001893387s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.853624988s)
--- PASS: TestAddons/parallel/CSI (58.98s)

                                                
                                    
TestAddons/parallel/Headlamp (20.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-663794 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-lm5sj" [a1a1f7a1-cd5b-4a2d-9032-1aca77d39328] Pending
helpers_test.go:352: "headlamp-6945c6f4d-lm5sj" [a1a1f7a1-cd5b-4a2d-9032-1aca77d39328] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-lm5sj" [a1a1f7a1-cd5b-4a2d-9032-1aca77d39328] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004426919s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable headlamp --alsologtostderr -v=1: (5.727897258s)
--- PASS: TestAddons/parallel/Headlamp (20.61s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-jhkp4" [b9695772-abf0-4cf5-bd8b-f79b94d313d6] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007289321s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                    
TestAddons/parallel/LocalPath (14.63s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-663794 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-663794 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2b07115c-5b39-44ab-ab89-43f09d215e73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2b07115c-5b39-44ab-ab89-43f09d215e73] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2b07115c-5b39-44ab-ab89-43f09d215e73] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.00543151s
addons_test.go:967: (dbg) Run:  kubectl --context addons-663794 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 ssh "cat /opt/local-path-provisioner/pvc-7cb226ef-cf3e-40d0-abc8-3408242d700f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-663794 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-663794 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.63s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-tz8vm" [7fa140f3-685f-4d2a-8467-05ffa2701601] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.049946922s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                    
TestAddons/parallel/Yakd (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gxtsc" [42710222-68f3-4930-a7df-c6b0642f056b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003317344s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable yakd --alsologtostderr -v=1: (5.932395554s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
TestAddons/StoppedEnableDisable (77.1s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-663794
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-663794: (1m16.891247148s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-663794
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-663794
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-663794
--- PASS: TestAddons/StoppedEnableDisable (77.10s)

                                                
                                    
TestCertOptions (79.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-253271 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-253271 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m18.732373286s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-253271 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-253271 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-253271 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-253271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-253271
--- PASS: TestCertOptions (79.99s)

                                                
                                    
TestCertExpiration (298.28s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-710011 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-710011 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.740988448s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-710011 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1115 10:03:33.566363  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:03:50.487818  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-710011 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (56.555770699s)
helpers_test.go:175: Cleaning up "cert-expiration-710011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-710011
--- PASS: TestCertExpiration (298.28s)

                                                
                                    
TestForceSystemdFlag (40.84s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-632322 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-632322 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (39.55533888s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-632322 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-632322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-632322
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-632322: (1.081111455s)
--- PASS: TestForceSystemdFlag (40.84s)

                                                
                                    
TestForceSystemdEnv (58.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-491908 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-491908 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.364478327s)
helpers_test.go:175: Cleaning up "force-systemd-env-491908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-491908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-491908: (1.091002135s)
--- PASS: TestForceSystemdEnv (58.46s)

                                                
                                    
TestErrorSpam/setup (36.63s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-581339 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-581339 --driver=kvm2  --container-runtime=crio
E1115 09:13:50.493850  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.500368  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.511824  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.533383  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.574945  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.656558  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:50.818181  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:51.139975  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:51.782167  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:53.063716  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:13:55.626689  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:14:00.748684  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-581339 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-581339 --driver=kvm2  --container-runtime=crio: (36.628739549s)
--- PASS: TestErrorSpam/setup (36.63s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (5.85s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop: (1.799988975s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop: (2.057848265s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-581339 --log_dir /tmp/nospam-581339 stop: (1.987638548s)
--- PASS: TestErrorSpam/stop (5.85s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21895-243545/.minikube/files/etc/test/nested/copy/247445/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (58.45s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1115 09:14:31.472201  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-471384 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.44588329s)
--- PASS: TestFunctional/serial/StartWithProxy (58.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.22s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1115 09:15:09.856304  247445 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --alsologtostderr -v=8
E1115 09:15:12.434722  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-471384 --alsologtostderr -v=8: (53.223792541s)
functional_test.go:678: soft start took 53.224414683s for "functional-471384" cluster.
I1115 09:16:03.080578  247445 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (53.22s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-471384 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:3.1: (1.579867567s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:3.3: (1.662242443s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 cache add registry.k8s.io/pause:latest: (1.590889856s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-471384 /tmp/TestFunctionalserialCacheCmdcacheadd_local24405015/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache add minikube-local-cache-test:functional-471384
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 cache add minikube-local-cache-test:functional-471384: (2.258450471s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache delete minikube-local-cache-test:functional-471384
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-471384
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (184.066785ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 cache reload: (1.511300589s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 kubectl -- --context functional-471384 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-471384 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1115 09:16:34.356652  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-471384 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.821616726s)
functional_test.go:776: restart took 34.821756685s for "functional-471384" cluster.
I1115 09:16:48.323611  247445 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (34.82s)
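
The restart above boils down to one invocation; a minimal sketch, assuming the functional-471384 profile already exists so the extra flag is applied to the running cluster:

  # pass an extra apiserver flag through to the control plane and wait for every component to come back up
  out/minikube-linux-amd64 start -p functional-471384 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all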

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-471384 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 logs: (1.400344639s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 logs --file /tmp/TestFunctionalserialLogsFileCmd2789468923/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 logs --file /tmp/TestFunctionalserialLogsFileCmd2789468923/001/logs.txt: (1.35752045s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.31s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-471384 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-471384
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-471384: exit status 115 (232.158992ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.107:31660 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-471384 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
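
A minimal sketch of the scenario above, reusing the repo's testdata/invalidsvc.yaml fixture (per the error, a Service with no running pod behind it):

  kubectl --context functional-471384 apply -f testdata/invalidsvc.yaml
  # "minikube service" refuses to open a service that has no running pod behind it and exits 115 (SVC_UNREACHABLE)
  out/minikube-linux-amd64 service invalid-svc -p functional-471384
  kubectl --context functional-471384 delete -f testdata/invalidsvc.yaml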

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 config get cpus: exit status 14 (73.762585ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 config get cpus: exit status 14 (65.350319ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
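
The set/get/unset cycle above, as a minimal sketch (per the output, "config get" on an unset key exits 14 with "specified key could not be found in config"):

  out/minikube-linux-amd64 -p functional-471384 config set cpus 2
  out/minikube-linux-amd64 -p functional-471384 config get cpus     # succeeds while the key is set
  out/minikube-linux-amd64 -p functional-471384 config unset cpus
  out/minikube-linux-amd64 -p functional-471384 config get cpus     # exits 14: key not found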

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471384 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471384 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 253664: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471384 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.116953ms)

                                                
                                                
-- stdout --
	* [functional-471384] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:17:22.653352  253620 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:17:22.653561  253620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:17:22.653570  253620 out.go:374] Setting ErrFile to fd 2...
	I1115 09:17:22.653576  253620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:17:22.653927  253620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:17:22.654427  253620 out.go:368] Setting JSON to false
	I1115 09:17:22.655403  253620 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7185,"bootTime":1763191058,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:17:22.655609  253620 start.go:143] virtualization: kvm guest
	I1115 09:17:22.657291  253620 out.go:179] * [functional-471384] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:17:22.658630  253620 notify.go:221] Checking for updates...
	I1115 09:17:22.658685  253620 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:17:22.659928  253620 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:17:22.661194  253620 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:17:22.662490  253620 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:17:22.663789  253620 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:17:22.664996  253620 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:17:22.666761  253620 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:17:22.667384  253620 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:17:22.732344  253620 out.go:179] * Using the kvm2 driver based on existing profile
	I1115 09:17:22.733934  253620 start.go:309] selected driver: kvm2
	I1115 09:17:22.733977  253620 start.go:930] validating driver "kvm2" against &{Name:functional-471384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-471384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:17:22.734214  253620 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:17:22.736527  253620 out.go:203] 
	W1115 09:17:22.737869  253620 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:17:22.739278  253620 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
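
A minimal sketch of the two dry runs above: memory validation happens before any VM work, so the 250MB request is rejected (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB), while the same dry run without the undersized --memory passes:

  out/minikube-linux-amd64 start -p functional-471384 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p functional-471384 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio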

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471384 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471384 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (119.172909ms)

                                                
                                                
-- stdout --
	* [functional-471384] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:17:21.852011  253570 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:17:21.852318  253570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:17:21.852328  253570 out.go:374] Setting ErrFile to fd 2...
	I1115 09:17:21.852333  253570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:17:21.852640  253570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:17:21.853059  253570 out.go:368] Setting JSON to false
	I1115 09:17:21.853992  253570 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7184,"bootTime":1763191058,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:17:21.854095  253570 start.go:143] virtualization: kvm guest
	I1115 09:17:21.855594  253570 out.go:179] * [functional-471384] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1115 09:17:21.856922  253570 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:17:21.856936  253570 notify.go:221] Checking for updates...
	I1115 09:17:21.859260  253570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:17:21.860919  253570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:17:21.862088  253570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:17:21.863276  253570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:17:21.864373  253570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:17:21.866047  253570 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:17:21.866479  253570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:17:21.896761  253570 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1115 09:17:21.898022  253570 start.go:309] selected driver: kvm2
	I1115 09:17:21.898036  253570 start.go:930] validating driver "kvm2" against &{Name:functional-471384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-471384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:17:21.898137  253570 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:17:21.899978  253570 out.go:203] 
	W1115 09:17:21.901037  253570 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:17:21.902047  253570 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.69s)
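
A minimal sketch of the three status invocations exercised above; the -f argument is a Go template over the status fields, and the label text before each field is arbitrary:

  out/minikube-linux-amd64 -p functional-471384 status
  out/minikube-linux-amd64 -p functional-471384 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  out/minikube-linux-amd64 -p functional-471384 status -o json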

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (17.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-471384 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-471384 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9bpm9" [4d7719b9-efa7-4b32-8e60-c0cc4f1b2101] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9bpm9" [4d7719b9-efa7-4b32-8e60-c0cc4f1b2101] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.003979621s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.107:30487
functional_test.go:1680: http://192.168.39.107:30487: success! body:
Request served by hello-node-connect-7d85dfc575-9bpm9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.107:30487
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.36s)
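
A minimal sketch of the NodePort round-trip above (curl stands in for the test's Go HTTP client; the URL printed by "minikube service --url" was http://192.168.39.107:30487 in this run):

  kubectl --context functional-471384 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-471384 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-amd64 -p functional-471384 service hello-node-connect --url)
  curl "$URL"    # echo-server replies with a dump of the request it received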

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [41f8657f-f8ad-4d33-8454-f4bb4a9ac6a0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004830679s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-471384 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-471384 get pvc myclaim -o=json
I1115 09:17:02.441077  247445 retry.go:31] will retry after 2.597718156s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2b2ddbcb-8917-4e42-a34f-d269f7067324 ResourceVersion:672 Generation:0 CreationTimestamp:2025-11-15 09:17:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0016b0870 VolumeMode:0xc0016b0880 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-471384 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:17:05.255688  247445 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e904c7bd-8700-4682-9a28-12ec3dde54ba] Pending
helpers_test.go:352: "sp-pod" [e904c7bd-8700-4682-9a28-12ec3dde54ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e904c7bd-8700-4682-9a28-12ec3dde54ba] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003996693s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-471384 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-471384 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:17:27.308902  247445 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e486778f-8d5b-4883-b9a3-f520c3c0d153] Pending
helpers_test.go:352: "sp-pod" [e486778f-8d5b-4883-b9a3-f520c3c0d153] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e486778f-8d5b-4883-b9a3-f520c3c0d153] Running
2025/11/15 09:17:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003244322s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-471384 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.26s)
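
The persistence check above, condensed into a minimal sketch (pvc.yaml and pod.yaml are the repo's storage-provisioner fixtures; judging from the exec calls, the pod mounts the claim at /tmp/mount):

  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-471384 exec sp-pod -- touch /tmp/mount/foo
  # delete and recreate the pod; the file survives because it lives on the PVC, not in the container
  kubectl --context functional-471384 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-471384 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-471384 exec sp-pod -- ls /tmp/mount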

                                                
                                    
TestFunctional/parallel/SSHCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.33s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh -n functional-471384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cp functional-471384:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd276113686/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh -n functional-471384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh -n functional-471384 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)
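
A minimal sketch of the copy directions exercised above: host to node, node back to host, and host to a node path that does not exist yet (the local destination ./cp-test.txt is used here purely for illustration):

  out/minikube-linux-amd64 -p functional-471384 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-471384 cp functional-471384:/home/docker/cp-test.txt ./cp-test.txt
  out/minikube-linux-amd64 -p functional-471384 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  out/minikube-linux-amd64 -p functional-471384 ssh -n functional-471384 "sudo cat /tmp/does/not/exist/cp-test.txt"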

                                                
                                    
TestFunctional/parallel/MySQL (24.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-471384 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qj7m2" [fa7834b4-f4ef-4a78-83a6-0f5d9acfc15a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-qj7m2" [fa7834b4-f4ef-4a78-83a6-0f5d9acfc15a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.006263084s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;": exit status 1 (378.584602ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:17:15.633868  247445 retry.go:31] will retry after 533.59656ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;": exit status 1 (390.519201ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:17:16.558381  247445 retry.go:31] will retry after 1.688420308s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;": exit status 1 (147.292741ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:17:18.395089  247445 retry.go:31] will retry after 2.216849975s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-471384 exec mysql-5bb876957f-qj7m2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.66s)

                                                
                                    
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/247445/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /etc/test/nested/copy/247445/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

                                                
                                    
TestFunctional/parallel/CertSync (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/247445.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /etc/ssl/certs/247445.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/247445.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /usr/share/ca-certificates/247445.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2474452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /etc/ssl/certs/2474452.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2474452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /usr/share/ca-certificates/2474452.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-471384 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active docker": exit status 1 (164.326143ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active containerd": exit status 1 (162.715803ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
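
A minimal sketch of the check above: "minikube ssh" propagates the remote exit status, so systemctl's exit 3 for an inactive unit surfaces directly on the host:

  out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
  out/minikube-linux-amd64 -p functional-471384 ssh "sudo systemctl is-active containerd"   # likewise; only cri-o is active in this profile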

                                                
                                    
TestFunctional/parallel/License (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.89s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471384 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-471384
localhost/kicbase/echo-server:functional-471384
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471384 image ls --format short --alsologtostderr:
I1115 09:17:30.453278  253836 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:30.453376  253836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:30.453383  253836 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:30.453395  253836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:30.453628  253836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:17:30.454184  253836 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:30.454277  253836 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:30.456224  253836 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:30.458342  253836 main.go:143] libmachine: domain functional-471384 has defined MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:30.458724  253836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:a5:e5", ip: ""} in network mk-functional-471384: {Iface:virbr1 ExpiryTime:2025-11-15 10:14:26 +0000 UTC Type:0 Mac:52:54:00:2b:a5:e5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-471384 Clientid:01:52:54:00:2b:a5:e5}
I1115 09:17:30.458748  253836 main.go:143] libmachine: domain functional-471384 has defined IP address 192.168.39.107 and MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:30.458862  253836 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/functional-471384/id_rsa Username:docker}
I1115 09:17:30.561634  253836 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
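
The same listing is available in several renderings; a minimal sketch of the variants exercised in this run (the table and json forms appear further down):

  out/minikube-linux-amd64 -p functional-471384 image ls --format short
  out/minikube-linux-amd64 -p functional-471384 image ls --format table
  out/minikube-linux-amd64 -p functional-471384 image ls --format json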

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471384 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-471384  │ 5201c1cd60173 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-471384  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471384 image ls --format table --alsologtostderr:
I1115 09:17:33.815652  253933 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:33.815893  253933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:33.815901  253933 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:33.815905  253933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:33.816075  253933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:17:33.816623  253933 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:33.816722  253933 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:33.818900  253933 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:33.821268  253933 main.go:143] libmachine: domain functional-471384 has defined MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:33.821733  253933 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:a5:e5", ip: ""} in network mk-functional-471384: {Iface:virbr1 ExpiryTime:2025-11-15 10:14:26 +0000 UTC Type:0 Mac:52:54:00:2b:a5:e5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-471384 Clientid:01:52:54:00:2b:a5:e5}
I1115 09:17:33.821760  253933 main.go:143] libmachine: domain functional-471384 has defined IP address 192.168.39.107 and MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:33.821913  253933 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/functional-471384/id_rsa Username:docker}
I1115 09:17:33.911884  253933 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471384 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags
":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"5201c1cd60173a3d411629be4dd46ffdf7dfff11c469c6153ae0d1844d071db0","repoDigests":["localhost/minikube-local-cache-test@sha256:2484becd8d7646c6a737875ab9a169a9316aa4c14fe501f08c09a485f6055715"],"repoTags":["localhost/minikube-local-cache-test:funct
ional-471384"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"7
46911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-471384"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a
6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["regi
stry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1d
ddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471384 image ls --format json --alsologtostderr:
I1115 09:17:33.590903  253922 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:33.591011  253922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:33.591020  253922 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:33.591024  253922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:33.591233  253922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:17:33.591782  253922 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:33.591880  253922 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:33.594006  253922 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:33.596377  253922 main.go:143] libmachine: domain functional-471384 has defined MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:33.596919  253922 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:a5:e5", ip: ""} in network mk-functional-471384: {Iface:virbr1 ExpiryTime:2025-11-15 10:14:26 +0000 UTC Type:0 Mac:52:54:00:2b:a5:e5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-471384 Clientid:01:52:54:00:2b:a5:e5}
I1115 09:17:33.596946  253922 main.go:143] libmachine: domain functional-471384 has defined IP address 192.168.39.107 and MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:33.597129  253922 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/functional-471384/id_rsa Username:docker}
I1115 09:17:33.690420  253922 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471384 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-471384
size: "4943877"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5201c1cd60173a3d411629be4dd46ffdf7dfff11c469c6153ae0d1844d071db0
repoDigests:
- localhost/minikube-local-cache-test@sha256:2484becd8d7646c6a737875ab9a169a9316aa4c14fe501f08c09a485f6055715
repoTags:
- localhost/minikube-local-cache-test:functional-471384
size: "3330"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471384 image ls --format yaml --alsologtostderr:
I1115 09:17:30.698188  253847 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:30.698550  253847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:30.698565  253847 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:30.698572  253847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:30.698904  253847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:17:30.699819  253847 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:30.699998  253847 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:30.702652  253847 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:30.705575  253847 main.go:143] libmachine: domain functional-471384 has defined MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:30.706091  253847 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:a5:e5", ip: ""} in network mk-functional-471384: {Iface:virbr1 ExpiryTime:2025-11-15 10:14:26 +0000 UTC Type:0 Mac:52:54:00:2b:a5:e5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-471384 Clientid:01:52:54:00:2b:a5:e5}
I1115 09:17:30.706129  253847 main.go:143] libmachine: domain functional-471384 has defined IP address 192.168.39.107 and MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:30.706339  253847 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/functional-471384/id_rsa Username:docker}
I1115 09:17:30.804871  253847 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh pgrep buildkitd: exit status 1 (181.284983ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image build -t localhost/my-image:functional-471384 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 image build -t localhost/my-image:functional-471384 testdata/build --alsologtostderr: (4.657105832s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471384 image build -t localhost/my-image:functional-471384 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 56cbf48105d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-471384
--> 86378c09e68
Successfully tagged localhost/my-image:functional-471384
86378c09e68d7b2edb5539ca3c2f59f04b9b14205f91b38ed83fbc525b4cd929
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471384 image build -t localhost/my-image:functional-471384 testdata/build --alsologtostderr:
I1115 09:17:31.140227  253868 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:31.140456  253868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:31.140475  253868 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:31.140483  253868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:31.140742  253868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:17:31.141661  253868 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:31.142574  253868 config.go:182] Loaded profile config "functional-471384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:31.144799  253868 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:31.146884  253868 main.go:143] libmachine: domain functional-471384 has defined MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:31.147257  253868 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:a5:e5", ip: ""} in network mk-functional-471384: {Iface:virbr1 ExpiryTime:2025-11-15 10:14:26 +0000 UTC Type:0 Mac:52:54:00:2b:a5:e5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-471384 Clientid:01:52:54:00:2b:a5:e5}
I1115 09:17:31.147285  253868 main.go:143] libmachine: domain functional-471384 has defined IP address 192.168.39.107 and MAC address 52:54:00:2b:a5:e5 in network mk-functional-471384
I1115 09:17:31.147405  253868 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/functional-471384/id_rsa Username:docker}
I1115 09:17:31.242473  253868 build_images.go:162] Building image from path: /tmp/build.2648215316.tar
I1115 09:17:31.242548  253868 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:17:31.261183  253868 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2648215316.tar
I1115 09:17:31.266732  253868 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2648215316.tar: stat -c "%s %y" /var/lib/minikube/build/build.2648215316.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2648215316.tar': No such file or directory
I1115 09:17:31.266763  253868 ssh_runner.go:362] scp /tmp/build.2648215316.tar --> /var/lib/minikube/build/build.2648215316.tar (3072 bytes)
I1115 09:17:31.325873  253868 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2648215316
I1115 09:17:31.358061  253868 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2648215316 -xf /var/lib/minikube/build/build.2648215316.tar
I1115 09:17:31.376734  253868 crio.go:315] Building image: /var/lib/minikube/build/build.2648215316
I1115 09:17:31.376833  253868 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-471384 /var/lib/minikube/build/build.2648215316 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1115 09:17:35.694071  253868 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-471384 /var/lib/minikube/build/build.2648215316 --cgroup-manager=cgroupfs: (4.317209114s)
I1115 09:17:35.694153  253868 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2648215316
I1115 09:17:35.710599  253868 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2648215316.tar
I1115 09:17:35.722038  253868 build_images.go:218] Built localhost/my-image:functional-471384 from /tmp/build.2648215316.tar
I1115 09:17:35.722080  253868 build_images.go:134] succeeded building to: functional-471384
I1115 09:17:35.722086  253868 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.04s)
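Note: the three STEP lines logged above imply a very small build context. The following is a sketch of a context that would reproduce those steps by hand, not the actual contents of testdata/build; content.txt can be any small file, ./build is an illustrative directory name, and the profile name is taken from this run.

# Recreate a build context equivalent to the logged steps (assumption: the
# real testdata/build may differ beyond FROM / RUN true / ADD content.txt).
mkdir -p build && echo "test content" > build/content.txt
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# Build inside the cluster's container runtime (podman under crio), as the test does,
# then confirm the image is visible to the runtime. Substitute your own profile name.
minikube -p functional-471384 image build -t localhost/my-image:functional-471384 ./build
minikube -p functional-471384 image ls | grep my-image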

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.716984324s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-471384
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image load --daemon kicbase/echo-server:functional-471384 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 image load --daemon kicbase/echo-server:functional-471384 --alsologtostderr: (1.130471219s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (18.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdany-port1501152171/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763198217467879095" to /tmp/TestFunctionalparallelMountCmdany-port1501152171/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763198217467879095" to /tmp/TestFunctionalparallelMountCmdany-port1501152171/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763198217467879095" to /tmp/TestFunctionalparallelMountCmdany-port1501152171/001/test-1763198217467879095
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (169.815425ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:16:57.638021  247445 retry.go:31] will retry after 598.027506ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 test-1763198217467879095
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh cat /mount-9p/test-1763198217467879095
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-471384 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [64e920ff-6ffd-45a6-9b9c-8b1e0cd62c8d] Pending
helpers_test.go:352: "busybox-mount" [64e920ff-6ffd-45a6-9b9c-8b1e0cd62c8d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [64e920ff-6ffd-45a6-9b9c-8b1e0cd62c8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [64e920ff-6ffd-45a6-9b9c-8b1e0cd62c8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.00751423s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-471384 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdany-port1501152171/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.26s)
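Note: the 9p mount exercised above can be reproduced by hand with the same commands the test runs. A sketch; the host path is illustrative, and the profile name is from this run.

# Host side: mount a host directory into the guest over 9p (runs in the foreground).
mkdir -p /tmp/minikube-mount-demo
minikube mount -p functional-471384 /tmp/minikube-mount-demo:/mount-9p &

# Guest side: verify the 9p mount and inspect its contents, as the test does.
minikube -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-471384 ssh -- ls -la /mount-9p

# Cleanup: force-unmount in the guest and kill any leftover mount processes.
minikube -p functional-471384 ssh "sudo umount -f /mount-9p"
minikube mount -p functional-471384 --kill=true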

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image load --daemon kicbase/echo-server:functional-471384 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-471384
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image load --daemon kicbase/echo-server:functional-471384 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image save kicbase/echo-server:functional-471384 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 image save kicbase/echo-server:functional-471384 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.868426527s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image rm kicbase/echo-server:functional-471384 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-471384
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 image save --daemon kicbase/echo-server:functional-471384 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-471384
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
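Note: the ImageCommands tests in this block form a round trip between the local docker daemon, a tar archive, and the cluster's runtime. A sketch of the full cycle using the commands shown above; the tar path is illustrative (the test uses a Jenkins workspace path) and the profile name is from this run.

# Tag a local image for the profile and load it into the cluster's runtime.
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-471384
minikube -p functional-471384 image load --daemon kicbase/echo-server:functional-471384

# Save the in-cluster image to a tar, remove it, then restore it from the file.
minikube -p functional-471384 image save kicbase/echo-server:functional-471384 /tmp/echo-server-save.tar
minikube -p functional-471384 image rm kicbase/echo-server:functional-471384
minikube -p functional-471384 image load /tmp/echo-server-save.tar

# Push the in-cluster image back into the local docker daemon and verify it arrived.
minikube -p functional-471384 image save --daemon kicbase/echo-server:functional-471384
docker image inspect localhost/kicbase/echo-server:functional-471384
minikube -p functional-471384 image ls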

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdspecific-port3875926314/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.481978ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:17:15.935096  247445 retry.go:31] will retry after 568.614007ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdspecific-port3875926314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "sudo umount -f /mount-9p": exit status 1 (187.26769ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-471384 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdspecific-port3875926314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T" /mount1: exit status 1 (233.781076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:17:17.531916  247445 retry.go:31] will retry after 611.164112ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-471384 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471384 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4270194409/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-471384 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-471384 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vrb77" [e2b2cfdd-5a88-4a2d-9f28-32b0337bda85] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-vrb77" [e2b2cfdd-5a88-4a2d-9f28-32b0337bda85] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.011418202s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "253.593645ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.72815ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "272.789007ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.712174ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 service list: (1.267235647s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-471384 service list -o json: (1.392829797s)
functional_test.go:1504: Took "1.392929065s" to run "out/minikube-linux-amd64 -p functional-471384 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.107:30785
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-471384 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.107:30785
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
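Note: the ServiceCmd tests above follow the usual deploy / expose / URL-lookup flow. A sketch of the equivalent manual steps using the commands logged above; the image, port, and profile name mirror this run.

# Deploy the echo server and expose it as a NodePort service (as ServiceCmd/DeployApp does).
kubectl --context functional-471384 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-471384 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-471384 get pods -l app=hello-node

# Look up the service and its reachable URL through minikube.
minikube -p functional-471384 service list
minikube -p functional-471384 service --namespace=default --https --url hello-node
minikube -p functional-471384 service hello-node --url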

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-471384
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-471384
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-471384
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (205.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1115 09:18:50.487677  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:19:18.198385  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m24.463022778s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.04s)
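Note: the HA cluster above is started with the flags shown in the ha_test run line. A sketch of the equivalent invocation, assuming the kvm2 driver is installed; the profile name mirrors this run.

# Start a multi-control-plane (HA) cluster on the KVM driver with the CRI-O runtime.
minikube start -p ha-824346 --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio

# Confirm all nodes report healthy and the profile is registered.
minikube -p ha-824346 status --alsologtostderr -v 5
minikube profile list --output json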

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 kubectl -- rollout status deployment/busybox: (5.092976235s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-57vx6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-c5q9b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-lqds9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-57vx6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-c5q9b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-lqds9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-57vx6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-c5q9b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-lqds9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-57vx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-57vx6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-c5q9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-c5q9b -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-lqds9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 kubectl -- exec busybox-7b57f96db7-lqds9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
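Note: the host-reachability check above resolves host.minikube.internal inside a pod and pings the resulting address. A sketch of running the same check by hand against one of the busybox pods; the pod name is from this run and should be substituted with one returned by the get pods command above.

# Resolve the host's address as seen from inside a pod (5th line of nslookup output),
# then ping it once, exactly as the test does.
POD=busybox-7b57f96db7-57vx6
HOST_IP=$(minikube -p ha-824346 kubectl -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
minikube -p ha-824346 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"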

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (44.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node add --alsologtostderr -v 5
E1115 09:21:56.248758  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.255246  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.266616  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.288055  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.329497  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.411128  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.572754  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:56.894329  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:57.536529  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:58.818037  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 node add --alsologtostderr -v 5: (44.129893959s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
E1115 09:22:01.379498  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.81s)
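Adding a worker to an existing multi-control-plane profile is the same node subcommand the test drives; a minimal sketch, assuming a minikube binary on PATH and the profile name from this run:

    # add a worker node to the profile, then confirm every host reports Running
    minikube -p ha-824346 node add --alsologtostderr -v 5
    minikube -p ha-824346 status --alsologtostderr -v 5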

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-824346 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp testdata/cp-test.txt ha-824346:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1408840396/001/cp-test_ha-824346.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346:/home/docker/cp-test.txt ha-824346-m02:/home/docker/cp-test_ha-824346_ha-824346-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test_ha-824346_ha-824346-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346:/home/docker/cp-test.txt ha-824346-m03:/home/docker/cp-test_ha-824346_ha-824346-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test_ha-824346_ha-824346-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346:/home/docker/cp-test.txt ha-824346-m04:/home/docker/cp-test_ha-824346_ha-824346-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test_ha-824346_ha-824346-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp testdata/cp-test.txt ha-824346-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1408840396/001/cp-test_ha-824346-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test.txt"
E1115 09:22:06.501691  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt ha-824346:/home/docker/cp-test_ha-824346-m02_ha-824346.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test_ha-824346-m02_ha-824346.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt ha-824346-m03:/home/docker/cp-test_ha-824346-m02_ha-824346-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test_ha-824346-m02_ha-824346-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt ha-824346-m04:/home/docker/cp-test_ha-824346-m02_ha-824346-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test_ha-824346-m02_ha-824346-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp testdata/cp-test.txt ha-824346-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1408840396/001/cp-test_ha-824346-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m03:/home/docker/cp-test.txt ha-824346:/home/docker/cp-test_ha-824346-m03_ha-824346.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test_ha-824346-m03_ha-824346.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m03:/home/docker/cp-test.txt ha-824346-m02:/home/docker/cp-test_ha-824346-m03_ha-824346-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test_ha-824346-m03_ha-824346-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m03:/home/docker/cp-test.txt ha-824346-m04:/home/docker/cp-test_ha-824346-m03_ha-824346-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test_ha-824346-m03_ha-824346-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp testdata/cp-test.txt ha-824346-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1408840396/001/cp-test_ha-824346-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m04:/home/docker/cp-test.txt ha-824346:/home/docker/cp-test_ha-824346-m04_ha-824346.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346 "sudo cat /home/docker/cp-test_ha-824346-m04_ha-824346.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m04:/home/docker/cp-test.txt ha-824346-m02:/home/docker/cp-test_ha-824346-m04_ha-824346-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m02 "sudo cat /home/docker/cp-test_ha-824346-m04_ha-824346-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 cp ha-824346-m04:/home/docker/cp-test.txt ha-824346-m03:/home/docker/cp-test_ha-824346-m04_ha-824346-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test_ha-824346-m04_ha-824346-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.68s)
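The copy matrix above exercises every direction of minikube cp. A condensed sketch of the three cases (host to node, node to host, node to node), assuming the same profile; destination paths are illustrative:

    # host -> node
    minikube -p ha-824346 cp testdata/cp-test.txt ha-824346-m02:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-824346-m02.txt
    # node -> node, then read it back over ssh to verify the contents arrived
    minikube -p ha-824346 cp ha-824346-m02:/home/docker/cp-test.txt ha-824346-m03:/home/docker/cp-test_copy.txt
    minikube -p ha-824346 ssh -n ha-824346-m03 "sudo cat /home/docker/cp-test_copy.txt"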

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (82.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node stop m02 --alsologtostderr -v 5
E1115 09:22:16.743253  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:22:37.224685  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:23:18.187097  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 node stop m02 --alsologtostderr -v 5: (1m22.290186702s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5: exit status 7 (506.941973ms)

                                                
                                                
-- stdout --
	ha-824346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-824346-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-824346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-824346-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:23:35.802677  257381 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:23:35.802921  257381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:23:35.802932  257381 out.go:374] Setting ErrFile to fd 2...
	I1115 09:23:35.802938  257381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:23:35.803279  257381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:23:35.803509  257381 out.go:368] Setting JSON to false
	I1115 09:23:35.803550  257381 mustload.go:66] Loading cluster: ha-824346
	I1115 09:23:35.803737  257381 notify.go:221] Checking for updates...
	I1115 09:23:35.804214  257381 config.go:182] Loaded profile config "ha-824346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:23:35.804236  257381 status.go:174] checking status of ha-824346 ...
	I1115 09:23:35.806544  257381 status.go:371] ha-824346 host status = "Running" (err=<nil>)
	I1115 09:23:35.806562  257381 host.go:66] Checking if "ha-824346" exists ...
	I1115 09:23:35.809929  257381 main.go:143] libmachine: domain ha-824346 has defined MAC address 52:54:00:27:ec:9b in network mk-ha-824346
	I1115 09:23:35.810501  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:ec:9b", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:17:58 +0000 UTC Type:0 Mac:52:54:00:27:ec:9b Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-824346 Clientid:01:52:54:00:27:ec:9b}
	I1115 09:23:35.810537  257381 main.go:143] libmachine: domain ha-824346 has defined IP address 192.168.39.98 and MAC address 52:54:00:27:ec:9b in network mk-ha-824346
	I1115 09:23:35.810710  257381 host.go:66] Checking if "ha-824346" exists ...
	I1115 09:23:35.810975  257381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:23:35.813429  257381 main.go:143] libmachine: domain ha-824346 has defined MAC address 52:54:00:27:ec:9b in network mk-ha-824346
	I1115 09:23:35.813878  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:ec:9b", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:17:58 +0000 UTC Type:0 Mac:52:54:00:27:ec:9b Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-824346 Clientid:01:52:54:00:27:ec:9b}
	I1115 09:23:35.813922  257381 main.go:143] libmachine: domain ha-824346 has defined IP address 192.168.39.98 and MAC address 52:54:00:27:ec:9b in network mk-ha-824346
	I1115 09:23:35.814084  257381 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/ha-824346/id_rsa Username:docker}
	I1115 09:23:35.903585  257381 ssh_runner.go:195] Run: systemctl --version
	I1115 09:23:35.911564  257381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:23:35.928790  257381 kubeconfig.go:125] found "ha-824346" server: "https://192.168.39.254:8443"
	I1115 09:23:35.928829  257381 api_server.go:166] Checking apiserver status ...
	I1115 09:23:35.928871  257381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:23:35.948236  257381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W1115 09:23:35.959570  257381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:23:35.959625  257381 ssh_runner.go:195] Run: ls
	I1115 09:23:35.965194  257381 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1115 09:23:35.970184  257381 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1115 09:23:35.970210  257381 status.go:463] ha-824346 apiserver status = Running (err=<nil>)
	I1115 09:23:35.970222  257381 status.go:176] ha-824346 status: &{Name:ha-824346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:23:35.970244  257381 status.go:174] checking status of ha-824346-m02 ...
	I1115 09:23:35.971720  257381 status.go:371] ha-824346-m02 host status = "Stopped" (err=<nil>)
	I1115 09:23:35.971738  257381 status.go:384] host is not running, skipping remaining checks
	I1115 09:23:35.971745  257381 status.go:176] ha-824346-m02 status: &{Name:ha-824346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:23:35.971762  257381 status.go:174] checking status of ha-824346-m03 ...
	I1115 09:23:35.972951  257381 status.go:371] ha-824346-m03 host status = "Running" (err=<nil>)
	I1115 09:23:35.972968  257381 host.go:66] Checking if "ha-824346-m03" exists ...
	I1115 09:23:35.975288  257381 main.go:143] libmachine: domain ha-824346-m03 has defined MAC address 52:54:00:34:b8:49 in network mk-ha-824346
	I1115 09:23:35.975645  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:b8:49", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:19:57 +0000 UTC Type:0 Mac:52:54:00:34:b8:49 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-824346-m03 Clientid:01:52:54:00:34:b8:49}
	I1115 09:23:35.975684  257381 main.go:143] libmachine: domain ha-824346-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:34:b8:49 in network mk-ha-824346
	I1115 09:23:35.975844  257381 host.go:66] Checking if "ha-824346-m03" exists ...
	I1115 09:23:35.976062  257381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:23:35.978660  257381 main.go:143] libmachine: domain ha-824346-m03 has defined MAC address 52:54:00:34:b8:49 in network mk-ha-824346
	I1115 09:23:35.979191  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:b8:49", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:19:57 +0000 UTC Type:0 Mac:52:54:00:34:b8:49 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-824346-m03 Clientid:01:52:54:00:34:b8:49}
	I1115 09:23:35.979219  257381 main.go:143] libmachine: domain ha-824346-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:34:b8:49 in network mk-ha-824346
	I1115 09:23:35.979422  257381 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/ha-824346-m03/id_rsa Username:docker}
	I1115 09:23:36.068825  257381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:23:36.089822  257381 kubeconfig.go:125] found "ha-824346" server: "https://192.168.39.254:8443"
	I1115 09:23:36.089850  257381 api_server.go:166] Checking apiserver status ...
	I1115 09:23:36.089881  257381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:23:36.110270  257381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1779/cgroup
	W1115 09:23:36.122484  257381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1779/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:23:36.122547  257381 ssh_runner.go:195] Run: ls
	I1115 09:23:36.127853  257381 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1115 09:23:36.132795  257381 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1115 09:23:36.132822  257381 status.go:463] ha-824346-m03 apiserver status = Running (err=<nil>)
	I1115 09:23:36.132831  257381 status.go:176] ha-824346-m03 status: &{Name:ha-824346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:23:36.132857  257381 status.go:174] checking status of ha-824346-m04 ...
	I1115 09:23:36.134524  257381 status.go:371] ha-824346-m04 host status = "Running" (err=<nil>)
	I1115 09:23:36.134547  257381 host.go:66] Checking if "ha-824346-m04" exists ...
	I1115 09:23:36.136878  257381 main.go:143] libmachine: domain ha-824346-m04 has defined MAC address 52:54:00:61:c7:a7 in network mk-ha-824346
	I1115 09:23:36.137266  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:c7:a7", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:21:33 +0000 UTC Type:0 Mac:52:54:00:61:c7:a7 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:ha-824346-m04 Clientid:01:52:54:00:61:c7:a7}
	I1115 09:23:36.137286  257381 main.go:143] libmachine: domain ha-824346-m04 has defined IP address 192.168.39.85 and MAC address 52:54:00:61:c7:a7 in network mk-ha-824346
	I1115 09:23:36.137424  257381 host.go:66] Checking if "ha-824346-m04" exists ...
	I1115 09:23:36.137673  257381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:23:36.139886  257381 main.go:143] libmachine: domain ha-824346-m04 has defined MAC address 52:54:00:61:c7:a7 in network mk-ha-824346
	I1115 09:23:36.140221  257381 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:c7:a7", ip: ""} in network mk-ha-824346: {Iface:virbr1 ExpiryTime:2025-11-15 10:21:33 +0000 UTC Type:0 Mac:52:54:00:61:c7:a7 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:ha-824346-m04 Clientid:01:52:54:00:61:c7:a7}
	I1115 09:23:36.140243  257381 main.go:143] libmachine: domain ha-824346-m04 has defined IP address 192.168.39.85 and MAC address 52:54:00:61:c7:a7 in network mk-ha-824346
	I1115 09:23:36.140363  257381 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/ha-824346-m04/id_rsa Username:docker}
	I1115 09:23:36.225107  257381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:23:36.247719  257381 status.go:176] ha-824346-m04 status: &{Name:ha-824346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.80s)
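Once any node is down, minikube status reports the per-node state shown above and exits non-zero (exit status 7 on this run), so a script can gate on the exit code rather than parsing the output. A minimal sketch, assuming the same profile:

    minikube -p ha-824346 node stop m02
    if ! minikube -p ha-824346 status; then
        echo "at least one node is not running"   # status exited non-zero, as seen above
    fi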

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node start m02 --alsologtostderr -v 5
E1115 09:23:50.487516  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 node start m02 --alsologtostderr -v 5: (33.514753864s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 stop --alsologtostderr -v 5
E1115 09:24:40.108513  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:26:56.248876  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:27:23.952064  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 stop --alsologtostderr -v 5: (4m8.155399635s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 start --wait true --alsologtostderr -v 5
E1115 09:28:50.489062  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:30:13.560689  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 start --wait true --alsologtostderr -v 5: (1m56.240248674s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.56s)
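The restart check amounts to: record the node list, stop and start the whole profile, and confirm the list is unchanged. A sketch of that flow, assuming the same profile; the temp-file comparison is only one way to express the check, not the test's own assertion:

    minikube -p ha-824346 node list > /tmp/nodes.before
    minikube -p ha-824346 stop
    minikube -p ha-824346 start --wait true
    minikube -p ha-824346 node list > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after   # an empty diff means the node set survived the restart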

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 node delete m03 --alsologtostderr -v 5: (17.956194943s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.59s)
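Deleting a secondary control-plane node and confirming the rest of the cluster is still healthy can be reproduced with, for example:

    minikube -p ha-824346 node delete m03
    minikube -p ha-824346 status
    kubectl --context ha-824346 get nodes   # ha-824346-m03 should no longer be listed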

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (260.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 stop --alsologtostderr -v 5
E1115 09:31:56.254155  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:33:50.489355  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 stop --alsologtostderr -v 5: (4m20.089951579s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5: exit status 7 (64.998032ms)

                                                
                                                
-- stdout --
	ha-824346
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-824346-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-824346-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:34:56.038086  260593 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:34:56.038338  260593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:34:56.038346  260593 out.go:374] Setting ErrFile to fd 2...
	I1115 09:34:56.038351  260593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:34:56.038542  260593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:34:56.038710  260593 out.go:368] Setting JSON to false
	I1115 09:34:56.038744  260593 mustload.go:66] Loading cluster: ha-824346
	I1115 09:34:56.038890  260593 notify.go:221] Checking for updates...
	I1115 09:34:56.039104  260593 config.go:182] Loaded profile config "ha-824346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:34:56.039123  260593 status.go:174] checking status of ha-824346 ...
	I1115 09:34:56.041505  260593 status.go:371] ha-824346 host status = "Stopped" (err=<nil>)
	I1115 09:34:56.041521  260593 status.go:384] host is not running, skipping remaining checks
	I1115 09:34:56.041529  260593 status.go:176] ha-824346 status: &{Name:ha-824346 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:34:56.041556  260593 status.go:174] checking status of ha-824346-m02 ...
	I1115 09:34:56.042833  260593 status.go:371] ha-824346-m02 host status = "Stopped" (err=<nil>)
	I1115 09:34:56.042848  260593 status.go:384] host is not running, skipping remaining checks
	I1115 09:34:56.042853  260593 status.go:176] ha-824346-m02 status: &{Name:ha-824346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:34:56.042867  260593 status.go:174] checking status of ha-824346-m04 ...
	I1115 09:34:56.044082  260593 status.go:371] ha-824346-m04 host status = "Stopped" (err=<nil>)
	I1115 09:34:56.044096  260593 status.go:384] host is not running, skipping remaining checks
	I1115 09:34:56.044100  260593 status.go:176] ha-824346-m04 status: &{Name:ha-824346-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (260.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (90.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m30.053418721s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.69s)
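A full-cluster restart after a stop is a plain start with the original driver and runtime flags; --wait true asks start to wait for the cluster components before returning. A sketch, assuming the kvm2/crio combination used in this run:

    minikube -p ha-824346 stop
    minikube -p ha-824346 start --wait true --driver=kvm2 --container-runtime=crio
    kubectl --context ha-824346 get nodes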

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 node add --control-plane --alsologtostderr -v 5
E1115 09:36:56.254340  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-824346 node add --control-plane --alsologtostderr -v 5: (1m18.494216942s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-824346 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.18s)
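The only difference from the earlier worker add is --control-plane, which joins the new machine as an additional control-plane member. A minimal sketch:

    minikube -p ha-824346 node add --control-plane
    minikube -p ha-824346 status   # the new node should report type: Control Plane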

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                    
TestJSONOutput/start/Command (54.47s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-347518 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1115 09:38:19.315719  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-347518 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.464638969s)
--- PASS: TestJSONOutput/start/Command (54.47s)
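With --output=json, minikube start emits one JSON event per line (specversion/type/data, the same shape visible in the TestErrorJSONOutput output further down), which makes progress machine-readable. A sketch of pulling out just the step messages, assuming jq is installed; the profile name mirrors this run:

    minikube start -p json-output-347518 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'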

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-347518 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-347518 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-347518 --output=json --user=testUser
E1115 09:38:50.490946  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-347518 --output=json --user=testUser: (6.876200502s)
--- PASS: TestJSONOutput/stop/Command (6.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-426448 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-426448 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.528679ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bf4406cb-49ca-4945-b7ac-28fea69a7ced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-426448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3079422c-05a5-4af3-bbfe-17453198d122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21895"}}
	{"specversion":"1.0","id":"a8d0009c-205c-446f-a176-ebf1a13f824a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bf36484-0194-41d5-8ad0-ca70f83c95c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig"}}
	{"specversion":"1.0","id":"b1c46d6f-ecce-4345-b70b-49d8039ee544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube"}}
	{"specversion":"1.0","id":"78853094-59dc-4a17-95d8-eaa2ce36bfac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7de726fa-3ba6-4104-8f59-c201f7b88ba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f23f25a6-1f8c-46ae-a395-ecaded014a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-426448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-426448
--- PASS: TestErrorJSONOutput (0.23s)
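The failure path emits the same event stream, ending in an io.k8s.sigs.minikube.error event that carries the exit code (56, DRV_UNSUPPORTED_OS, for the deliberately bogus driver here). A sketch of surfacing only the error, again assuming jq is available:

    minikube start -p json-output-error-426448 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'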

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (76.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-575862 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-575862 --driver=kvm2  --container-runtime=crio: (36.565382037s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-578873 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-578873 --driver=kvm2  --container-runtime=crio: (37.208637623s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-575862
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-578873
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-578873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-578873
helpers_test.go:175: Cleaning up "first-575862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-575862
--- PASS: TestMinikubeProfile (76.41s)
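The profile flow above amounts to: create two clusters, switch the active profile between them, list what exists, then clean up. A condensed sketch, assuming minikube is on PATH:

    minikube start -p first-575862 --driver=kvm2 --container-runtime=crio
    minikube start -p second-578873 --driver=kvm2 --container-runtime=crio
    minikube profile first-575862    # make first-575862 the active profile
    minikube profile list -ojson     # inspect both profiles as JSON
    minikube delete -p second-578873
    minikube delete -p first-575862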

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-000436 --memory=3072 --mount-string /tmp/TestMountStartserial984312356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-000436 --memory=3072 --mount-string /tmp/TestMountStartserial984312356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.38812522s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.39s)
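The start flags here map a host directory into the guest: --mount-string is host-path:guest-path, and --mount-uid/--mount-gid/--mount-msize/--mount-port tune the ownership and transport of that mount. A sketch mirroring the test's invocation with an illustrative host path, verified the same way the VerifyMount steps below do:

    minikube start -p mount-start-1-000436 --memory=3072 --no-kubernetes --driver=kvm2 --container-runtime=crio \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    minikube -p mount-start-1-000436 ssh -- findmnt --json /minikube-host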

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-000436 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-000436 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-023292 --memory=3072 --mount-string /tmp/TestMountStartserial984312356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-023292 --memory=3072 --mount-string /tmp/TestMountStartserial984312356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.260361605s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.26s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-000436 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-023292
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-023292: (1.212056539s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.72s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-023292
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-023292: (17.719593028s)
--- PASS: TestMountStart/serial/RestartStopped (18.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-023292 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635899 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1115 09:41:56.249688  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635899 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.329861623s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.68s)
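The fresh-start step is the baseline multi-node bring-up. A minimal equivalent, sketched with a hypothetical profile name:

	# two-node cluster on KVM with CRI-O, waiting for all components to be ready
	out/minikube-linux-amd64 start -p multi-demo --nodes=2 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	# both the control plane and the worker should report host/kubelet Running
	out/minikube-linux-amd64 -p multi-demo status --alsologtostderr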

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (7.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-635899 -- rollout status deployment/busybox: (5.596568594s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-gkl8r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-jx29x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-gkl8r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-jx29x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-gkl8r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-jx29x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.17s)
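The deploy step checks that pods spread across both nodes can resolve internal and external names. A condensed sketch of the same loop, assuming the hypothetical multi-demo context and the repository's busybox test manifest:

	kubectl --context multi-demo apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multi-demo rollout status deployment/busybox
	# run the lookup from every busybox pod; each should succeed regardless of the node it landed on
	for pod in $(kubectl --context multi-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context multi-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done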

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-gkl8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-gkl8r -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-jx29x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635899 -- exec busybox-7b57f96db7-jx29x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
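The host-ping step confirms that pods can reach the hypervisor host itself. A self-contained sketch using the gateway address 192.168.39.1 observed in this run (the context name is hypothetical):

	POD=$(kubectl --context multi-demo get pods -o jsonpath='{.items[0].metadata.name}')
	# resolve host.minikube.internal inside the pod, then ping the host gateway once
	kubectl --context multi-demo exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context multi-demo exec "$POD" -- sh -c "ping -c 1 192.168.39.1"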

                                                
                                    
x
+
TestMultiNode/serial/AddNode (40.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-635899 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-635899 -v=5 --alsologtostderr: (40.029612843s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.49s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-635899 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp testdata/cp-test.txt multinode-635899:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile699023754/001/cp-test_multinode-635899.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899:/home/docker/cp-test.txt multinode-635899-m02:/home/docker/cp-test_multinode-635899_multinode-635899-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test_multinode-635899_multinode-635899-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899:/home/docker/cp-test.txt multinode-635899-m03:/home/docker/cp-test_multinode-635899_multinode-635899-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test_multinode-635899_multinode-635899-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp testdata/cp-test.txt multinode-635899-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile699023754/001/cp-test_multinode-635899-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m02:/home/docker/cp-test.txt multinode-635899:/home/docker/cp-test_multinode-635899-m02_multinode-635899.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test_multinode-635899-m02_multinode-635899.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m02:/home/docker/cp-test.txt multinode-635899-m03:/home/docker/cp-test_multinode-635899-m02_multinode-635899-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test_multinode-635899-m02_multinode-635899-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp testdata/cp-test.txt multinode-635899-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile699023754/001/cp-test_multinode-635899-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m03:/home/docker/cp-test.txt multinode-635899:/home/docker/cp-test_multinode-635899-m03_multinode-635899.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899 "sudo cat /home/docker/cp-test_multinode-635899-m03_multinode-635899.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 cp multinode-635899-m03:/home/docker/cp-test.txt multinode-635899-m02:/home/docker/cp-test_multinode-635899-m03_multinode-635899-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 ssh -n multinode-635899-m02 "sudo cat /home/docker/cp-test_multinode-635899-m03_multinode-635899-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.00s)
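The copy matrix pushes one file to every node and round-trips it through each pair. The essential patterns, trimmed to one example per direction (profile and paths are illustrative):

	# host -> node, node -> host, and node -> node, each verified over ssh afterwards
	out/minikube-linux-amd64 -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
	out/minikube-linux-amd64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"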

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-635899 node stop m03: (1.515524369s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635899 status: exit status 7 (340.302688ms)

                                                
                                                
-- stdout --
	multinode-635899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr: exit status 7 (325.20442ms)

                                                
                                                
-- stdout --
	multinode-635899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:43:49.186571  266050 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:43:49.186674  266050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:43:49.186681  266050 out.go:374] Setting ErrFile to fd 2...
	I1115 09:43:49.186691  266050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:43:49.186899  266050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:43:49.187104  266050 out.go:368] Setting JSON to false
	I1115 09:43:49.187143  266050 mustload.go:66] Loading cluster: multinode-635899
	I1115 09:43:49.187244  266050 notify.go:221] Checking for updates...
	I1115 09:43:49.187613  266050 config.go:182] Loaded profile config "multinode-635899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:43:49.187631  266050 status.go:174] checking status of multinode-635899 ...
	I1115 09:43:49.189567  266050 status.go:371] multinode-635899 host status = "Running" (err=<nil>)
	I1115 09:43:49.189587  266050 host.go:66] Checking if "multinode-635899" exists ...
	I1115 09:43:49.192202  266050 main.go:143] libmachine: domain multinode-635899 has defined MAC address 52:54:00:ab:dd:c6 in network mk-multinode-635899
	I1115 09:43:49.192657  266050 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:c6", ip: ""} in network mk-multinode-635899: {Iface:virbr1 ExpiryTime:2025-11-15 10:41:27 +0000 UTC Type:0 Mac:52:54:00:ab:dd:c6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-635899 Clientid:01:52:54:00:ab:dd:c6}
	I1115 09:43:49.192686  266050 main.go:143] libmachine: domain multinode-635899 has defined IP address 192.168.39.216 and MAC address 52:54:00:ab:dd:c6 in network mk-multinode-635899
	I1115 09:43:49.192834  266050 host.go:66] Checking if "multinode-635899" exists ...
	I1115 09:43:49.193106  266050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:43:49.195318  266050 main.go:143] libmachine: domain multinode-635899 has defined MAC address 52:54:00:ab:dd:c6 in network mk-multinode-635899
	I1115 09:43:49.195840  266050 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:c6", ip: ""} in network mk-multinode-635899: {Iface:virbr1 ExpiryTime:2025-11-15 10:41:27 +0000 UTC Type:0 Mac:52:54:00:ab:dd:c6 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:multinode-635899 Clientid:01:52:54:00:ab:dd:c6}
	I1115 09:43:49.195880  266050 main.go:143] libmachine: domain multinode-635899 has defined IP address 192.168.39.216 and MAC address 52:54:00:ab:dd:c6 in network mk-multinode-635899
	I1115 09:43:49.196107  266050 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/multinode-635899/id_rsa Username:docker}
	I1115 09:43:49.280175  266050 ssh_runner.go:195] Run: systemctl --version
	I1115 09:43:49.286320  266050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:43:49.303226  266050 kubeconfig.go:125] found "multinode-635899" server: "https://192.168.39.216:8443"
	I1115 09:43:49.303268  266050 api_server.go:166] Checking apiserver status ...
	I1115 09:43:49.303309  266050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:43:49.323250  266050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup
	W1115 09:43:49.337080  266050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:43:49.337150  266050 ssh_runner.go:195] Run: ls
	I1115 09:43:49.342106  266050 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1115 09:43:49.346858  266050 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I1115 09:43:49.346884  266050 status.go:463] multinode-635899 apiserver status = Running (err=<nil>)
	I1115 09:43:49.346909  266050 status.go:176] multinode-635899 status: &{Name:multinode-635899 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:43:49.346936  266050 status.go:174] checking status of multinode-635899-m02 ...
	I1115 09:43:49.348620  266050 status.go:371] multinode-635899-m02 host status = "Running" (err=<nil>)
	I1115 09:43:49.348639  266050 host.go:66] Checking if "multinode-635899-m02" exists ...
	I1115 09:43:49.351305  266050 main.go:143] libmachine: domain multinode-635899-m02 has defined MAC address 52:54:00:cd:74:47 in network mk-multinode-635899
	I1115 09:43:49.351707  266050 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:74:47", ip: ""} in network mk-multinode-635899: {Iface:virbr1 ExpiryTime:2025-11-15 10:42:21 +0000 UTC Type:0 Mac:52:54:00:cd:74:47 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:multinode-635899-m02 Clientid:01:52:54:00:cd:74:47}
	I1115 09:43:49.351737  266050 main.go:143] libmachine: domain multinode-635899-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:cd:74:47 in network mk-multinode-635899
	I1115 09:43:49.351860  266050 host.go:66] Checking if "multinode-635899-m02" exists ...
	I1115 09:43:49.352079  266050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:43:49.354081  266050 main.go:143] libmachine: domain multinode-635899-m02 has defined MAC address 52:54:00:cd:74:47 in network mk-multinode-635899
	I1115 09:43:49.354523  266050 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:74:47", ip: ""} in network mk-multinode-635899: {Iface:virbr1 ExpiryTime:2025-11-15 10:42:21 +0000 UTC Type:0 Mac:52:54:00:cd:74:47 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:multinode-635899-m02 Clientid:01:52:54:00:cd:74:47}
	I1115 09:43:49.354547  266050 main.go:143] libmachine: domain multinode-635899-m02 has defined IP address 192.168.39.236 and MAC address 52:54:00:cd:74:47 in network mk-multinode-635899
	I1115 09:43:49.354694  266050 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/multinode-635899-m02/id_rsa Username:docker}
	I1115 09:43:49.435085  266050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:43:49.450819  266050 status.go:176] multinode-635899-m02 status: &{Name:multinode-635899-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:43:49.450875  266050 status.go:174] checking status of multinode-635899-m03 ...
	I1115 09:43:49.452670  266050 status.go:371] multinode-635899-m03 host status = "Stopped" (err=<nil>)
	I1115 09:43:49.452694  266050 status.go:384] host is not running, skipping remaining checks
	I1115 09:43:49.452702  266050 status.go:176] multinode-635899-m03 status: &{Name:multinode-635899-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
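Stopping one node leaves the rest of the cluster up, and status reports the degraded state through its exit code. Sketch (profile name hypothetical):

	out/minikube-linux-amd64 -p multi-demo node stop m03
	# exit status 7 is expected while any host is Stopped; the per-node breakdown is printed on stdout
	out/minikube-linux-amd64 -p multi-demo status || echo "status exited with $? (7 means at least one node is stopped)"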

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (41.30s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 node start m03 -v=5 --alsologtostderr
E1115 09:43:50.487609  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-635899 node start m03 -v=5 --alsologtostderr: (40.788369522s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.30s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (306.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635899
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-635899
E1115 09:46:53.564970  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:46:56.251114  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-635899: (2m56.027782686s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635899 --wait=true -v=5 --alsologtostderr
E1115 09:48:50.488630  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635899 --wait=true -v=5 --alsologtostderr: (2m10.3986999s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635899
--- PASS: TestMultiNode/serial/RestartKeepsNodes (306.55s)
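The restart step verifies that a full stop/start cycle preserves the node list. Sketch, reusing the hypothetical profile from the earlier sketches:

	out/minikube-linux-amd64 node list -p multi-demo
	out/minikube-linux-amd64 stop -p multi-demo
	out/minikube-linux-amd64 start -p multi-demo --wait=true
	# the same nodes should appear again after the restart
	out/minikube-linux-amd64 node list -p multi-demo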

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-635899 node delete m03: (2.176100883s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.65s)
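Deleting a node removes it from both the minikube profile and the Kubernetes cluster. Sketch:

	out/minikube-linux-amd64 -p multi-demo node delete m03
	out/minikube-linux-amd64 -p multi-demo status --alsologtostderr
	# kubectl should no longer list the deleted node, and the remaining nodes should be Ready
	kubectl get nodes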

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (172.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 stop
E1115 09:51:56.248898  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-635899 stop: (2m52.245246693s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635899 status: exit status 7 (65.307047ms)

                                                
                                                
-- stdout --
	multinode-635899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr: exit status 7 (63.538934ms)

                                                
                                                
-- stdout --
	multinode-635899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:52:32.327107  268512 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:52:32.327373  268512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:52:32.327382  268512 out.go:374] Setting ErrFile to fd 2...
	I1115 09:52:32.327386  268512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:52:32.327623  268512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:52:32.327806  268512 out.go:368] Setting JSON to false
	I1115 09:52:32.327838  268512 mustload.go:66] Loading cluster: multinode-635899
	I1115 09:52:32.327925  268512 notify.go:221] Checking for updates...
	I1115 09:52:32.328261  268512 config.go:182] Loaded profile config "multinode-635899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:52:32.328279  268512 status.go:174] checking status of multinode-635899 ...
	I1115 09:52:32.330519  268512 status.go:371] multinode-635899 host status = "Stopped" (err=<nil>)
	I1115 09:52:32.330538  268512 status.go:384] host is not running, skipping remaining checks
	I1115 09:52:32.330545  268512 status.go:176] multinode-635899 status: &{Name:multinode-635899 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:52:32.330567  268512 status.go:174] checking status of multinode-635899-m02 ...
	I1115 09:52:32.331875  268512 status.go:371] multinode-635899-m02 host status = "Stopped" (err=<nil>)
	I1115 09:52:32.331891  268512 status.go:384] host is not running, skipping remaining checks
	I1115 09:52:32.331897  268512 status.go:176] multinode-635899-m02 status: &{Name:multinode-635899-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.37s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (83.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635899 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1115 09:53:50.489267  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635899 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m22.72101234s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635899 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.17s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (38.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635899
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635899-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-635899-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.101219ms)

                                                
                                                
-- stdout --
	* [multinode-635899-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-635899-m02' is duplicated with machine name 'multinode-635899-m02' in profile 'multinode-635899'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635899-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635899-m03 --driver=kvm2  --container-runtime=crio: (37.646789403s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-635899
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-635899: exit status 80 (197.302551ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-635899 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-635899-m03 already exists in multinode-635899-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-635899-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.83s)
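Profile names must be unique across existing profiles and their machine names, so reusing a generated worker name is rejected before any VM work starts. Sketch (names hypothetical):

	# fails with exit status 14 (MK_USAGE) because multi-demo-m02 is already a machine of profile multi-demo
	out/minikube-linux-amd64 start -p multi-demo-m02 --driver=kvm2 --container-runtime=crio || echo "rejected: $?"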

                                                
                                    
x
+
TestScheduledStopUnix (108.48s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-127530 --memory=3072 --driver=kvm2  --container-runtime=crio
E1115 09:56:56.253583  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-127530 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.818379376s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127530 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:57:25.925472  270795 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:25.925776  270795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:25.925788  270795 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:25.925792  270795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:25.926037  270795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:57:25.926337  270795 out.go:368] Setting JSON to false
	I1115 09:57:25.926454  270795 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:25.926789  270795 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:25.926873  270795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/config.json ...
	I1115 09:57:25.927078  270795 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:25.927205  270795 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-127530 -n scheduled-stop-127530
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127530 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:57:26.221313  270841 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:26.221575  270841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:26.221584  270841 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:26.221588  270841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:26.221784  270841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:57:26.222010  270841 out.go:368] Setting JSON to false
	I1115 09:57:26.222207  270841 daemonize_unix.go:73] killing process 270830 as it is an old scheduled stop
	I1115 09:57:26.222319  270841 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:26.222810  270841 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:26.222902  270841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/config.json ...
	I1115 09:57:26.223124  270841 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:26.223245  270841 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 09:57:26.228283  247445 retry.go:31] will retry after 141.473µs: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.229486  247445 retry.go:31] will retry after 173.588µs: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.230672  247445 retry.go:31] will retry after 335.107µs: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.231804  247445 retry.go:31] will retry after 479.492µs: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.232948  247445 retry.go:31] will retry after 369.056µs: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.234075  247445 retry.go:31] will retry after 1.065962ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.235206  247445 retry.go:31] will retry after 1.221669ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.237420  247445 retry.go:31] will retry after 2.293277ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.240633  247445 retry.go:31] will retry after 2.412067ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.243831  247445 retry.go:31] will retry after 2.67155ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.247111  247445 retry.go:31] will retry after 5.249958ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.253483  247445 retry.go:31] will retry after 8.258057ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.262701  247445 retry.go:31] will retry after 16.388413ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.280023  247445 retry.go:31] will retry after 16.810004ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.297338  247445 retry.go:31] will retry after 21.84061ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
I1115 09:57:26.319631  247445 retry.go:31] will retry after 37.394611ms: open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127530 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127530 -n scheduled-stop-127530
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-127530
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-127530 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:57:51.937902  270990 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:51.938145  270990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:51.938153  270990 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:51.938158  270990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:51.938340  270990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:57:51.938577  270990 out.go:368] Setting JSON to false
	I1115 09:57:51.938661  270990 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:51.939001  270990 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:51.939070  270990 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/scheduled-stop-127530/config.json ...
	I1115 09:57:51.939258  270990 mustload.go:66] Loading cluster: scheduled-stop-127530
	I1115 09:57:51.939350  270990 config.go:182] Loaded profile config "scheduled-stop-127530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-127530
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-127530: exit status 7 (60.576817ms)

                                                
                                                
-- stdout --
	scheduled-stop-127530
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127530 -n scheduled-stop-127530
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-127530 -n scheduled-stop-127530: exit status 7 (60.386945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-127530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-127530
--- PASS: TestScheduledStopUnix (108.48s)
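The scheduled-stop flow schedules a delayed shutdown, replaces it, cancels it, and finally lets one fire. A condensed sketch with a hypothetical profile:

	# schedule a stop five minutes out, then replace it with a 15-second one
	out/minikube-linux-amd64 stop -p sched-demo --schedule 5m
	out/minikube-linux-amd64 stop -p sched-demo --schedule 15s
	# cancel whatever is pending; TimeToStop reads empty again afterwards
	out/minikube-linux-amd64 stop -p sched-demo --cancel-scheduled
	out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p sched-demo
	# once a scheduled stop is allowed to fire, the host reports Stopped and status exits 7
	out/minikube-linux-amd64 status --format='{{.Host}}' -p sched-demo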

                                                
                                    
x
+
TestRunningBinaryUpgrade (125.36s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3831234849 start -p running-upgrade-929833 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3831234849 start -p running-upgrade-929833 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m24.766478773s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-929833 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-929833 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.130554408s)
helpers_test.go:175: Cleaning up "running-upgrade-929833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-929833
--- PASS: TestRunningBinaryUpgrade (125.36s)
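The running-upgrade test brings a profile up with an older released binary and then re-runs start on the same profile with the binary under test, upgrading it in place. Sketch; the old binary path is purely illustrative:

	# bring the profile up with a previously released minikube
	/tmp/minikube-v1.32.0 start -p running-upgrade-demo --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	# upgrade in place by starting the same profile with the new binary, then clean up
	out/minikube-linux-amd64 start -p running-upgrade-demo --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-demo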

                                                
                                    
x
+
TestKubernetesUpgrade (144.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.981339596s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-129083
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-129083: (2.058251461s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-129083 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-129083 status --format={{.Host}}: exit status 7 (72.604619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.896892143s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-129083 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.02481ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-129083] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-129083
	    minikube start -p kubernetes-upgrade-129083 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1290832 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-129083 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129083 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.959323626s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-129083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-129083
--- PASS: TestKubernetesUpgrade (144.01s)
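The Kubernetes-upgrade test moves a stopped cluster from v1.28.0 to v1.34.1 and then confirms an in-place downgrade is refused. Sketch (profile name hypothetical):

	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p k8s-upgrade-demo
	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
	# asking for the older version on the upgraded profile exits 106 (K8S_DOWNGRADE_UNSUPPORTED)
	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio || true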

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (102.917367ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-410124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (95.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410124 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1115 09:58:50.487579  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410124 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.106988743s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-410124 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-547391 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-547391 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (139.974804ms)

                                                
                                                
-- stdout --
	* [false-547391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:59:23.728138  272584 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:59:23.728232  272584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:59:23.728237  272584 out.go:374] Setting ErrFile to fd 2...
	I1115 09:59:23.728241  272584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:59:23.728465  272584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
	I1115 09:59:23.728979  272584 out.go:368] Setting JSON to false
	I1115 09:59:23.729892  272584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9706,"bootTime":1763191058,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:59:23.729998  272584 start.go:143] virtualization: kvm guest
	I1115 09:59:23.732008  272584 out.go:179] * [false-547391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:59:23.733436  272584 notify.go:221] Checking for updates...
	I1115 09:59:23.733466  272584 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:59:23.734927  272584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:59:23.736298  272584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
	I1115 09:59:23.737725  272584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
	I1115 09:59:23.738931  272584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:59:23.740007  272584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:59:23.742001  272584 config.go:182] Loaded profile config "NoKubernetes-410124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:59:23.742178  272584 config.go:182] Loaded profile config "force-systemd-env-491908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:59:23.742333  272584 config.go:182] Loaded profile config "offline-crio-379010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:59:23.742519  272584 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:59:23.781739  272584 out.go:179] * Using the kvm2 driver based on user configuration
	I1115 09:59:23.782913  272584 start.go:309] selected driver: kvm2
	I1115 09:59:23.782929  272584 start.go:930] validating driver "kvm2" against <nil>
	I1115 09:59:23.782942  272584 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:59:23.784827  272584 out.go:203] 
	W1115 09:59:23.785967  272584 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 09:59:23.787049  272584 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-547391 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-547391

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547391"

                                                
                                                
----------------------- debugLogs end: false-547391 [took: 4.251117003s] --------------------------------
helpers_test.go:175: Cleaning up "false-547391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-547391
--- PASS: TestNetworkPlugins/group/false (4.55s)
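
The "false" group above passes because refusing the configuration is the expected outcome: with the crio container runtime, minikube validates the CNI choice during driver validation and exits with MK_USAGE before a profile is ever created, which is also why every debugLogs probe afterwards reports a missing context. A minimal sketch of asserting that behaviour, assuming the --cni=false flag used by this group and a hypothetical throwaway profile name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Not the suite's real code: start a throwaway profile with CNI disabled
	// on the crio runtime and check that minikube refuses the combination.
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "false-sketch", // hypothetical profile name
			"--memory=3072", "--cni=false",
			"--driver=kvm2", "--container-runtime=crio").CombinedOutput()
		if err == nil {
			fmt.Println("unexpected: start succeeded with --cni=false on crio")
			return
		}
		if strings.Contains(string(out), "MK_USAGE") {
			fmt.Println("got the expected usage error")
		}
	}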

                                                
                                    
x
+
TestISOImage/Setup (43.27s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-620749 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-620749 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.266752445s)
--- PASS: TestISOImage/Setup (43.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (49.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.342398254s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-410124 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-410124 status -o json: exit status 2 (213.125238ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-410124","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-410124
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.54s)
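
The non-zero exit from "status -o json" is expected for a --no-kubernetes profile: the host is running while the kubelet and API server stay stopped, and minikube signals that mixed state through its exit code. A minimal sketch of reading the JSON shown above, assuming a hand-rolled struct rather than minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the keys visible in the stdout above; the struct is an
	// illustration, not minikube's internal type.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-410124","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s Status
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}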

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which socat"
E1115 10:06:56.248832  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.3s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.30s)
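
The Binaries subtests above are all the same check applied to different tools: ssh into the guest-620749 VM built from the ISO and confirm each binary resolves with "which". A minimal table-driven sketch of that loop, assuming a hypothetical runSSH helper in place of the suite's own command runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runSSH is a hypothetical stand-in for the test framework's runner:
	// it shells out to minikube ssh against the given profile.
	func runSSH(profile, remoteCmd string) error {
		return exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", remoteCmd).Run()
	}

	func main() {
		// The binaries probed for by the subtests above.
		bins := []string{"crictl", "curl", "docker", "git", "iptables",
			"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"}
		for _, b := range bins {
			if err := runSSH("guest-620749", "which "+b); err != nil {
				fmt.Printf("missing %s: %v\n", b, err)
				continue
			}
			fmt.Printf("found %s\n", b)
		}
	}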

                                                
                                    
x
+
TestNoKubernetes/serial/Start (40.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410124 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.753139605s)
--- PASS: TestNoKubernetes/serial/Start (40.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21895-243545/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
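
VerifyNok8sNoK8sDownloads only inspects the cache directory named in the log line above: a --no-kubernetes start uses the placeholder version v0.0.0, so nothing should have been downloaded under it. A minimal sketch of that check, under the assumption that "absent or empty" is the condition being asserted:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path taken from the log line above.
		dir := "/home/jenkins/minikube-integration/21895-243545/.minikube/cache/linux/amd64/v0.0.0"
		entries, err := os.ReadDir(dir)
		if os.IsNotExist(err) {
			fmt.Println("ok: cache directory does not exist, nothing was downloaded")
			return
		}
		if err != nil {
			fmt.Printf("cannot inspect cache: %v\n", err)
			return
		}
		if len(entries) == 0 {
			fmt.Println("ok: cache directory is empty")
			return
		}
		fmt.Printf("unexpected: %d cached file(s) found\n", len(entries))
	}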

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-410124 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-410124 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.587968ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
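
The check above asserts a negative: kubelet must not be an active systemd unit inside the --no-kubernetes profile, so the expected result is a non-zero exit from "systemctl is-active", which is why the stderr block counts as a pass. A minimal sketch of the same assertion, assuming plain exec.ExitError inspection rather than the suite's helpers:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-410124",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("unexpected: kubelet is active")
		case errors.As(err, &exitErr):
			// A non-zero code from is-active is the desired outcome for a
			// profile that should not be running Kubernetes.
			fmt.Printf("kubelet not active (exit %d), as expected\n", exitErr.ExitCode())
		default:
			fmt.Printf("ssh did not run: %v\n", err)
		}
	}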

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-410124
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-410124: (1.265080268s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (53.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410124 --driver=kvm2  --container-runtime=crio
E1115 10:01:56.253836  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410124 --driver=kvm2  --container-runtime=crio: (53.122621439s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-410124 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-410124 "sudo systemctl is-active --quiet service kubelet": exit status 1 (172.985507ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.97s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (106.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.885661594 start -p stopped-upgrade-998220 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.885661594 start -p stopped-upgrade-998220 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (47.583547095s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.885661594 -p stopped-upgrade-998220 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.885661594 -p stopped-upgrade-998220 stop: (1.702014367s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-998220 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-998220 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.235247502s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.52s)
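
The upgrade scenario above is a three-step sequence: provision a cluster with the released v1.32.0 binary, stop it with that same binary, then start it again with the binary built for this run. A minimal sketch of that flow, reusing the temporary binary path and profile name from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one step and stops the sketch on the first failure.
	func run(name string, args ...string) {
		if err := exec.Command(name, args...).Run(); err != nil {
			panic(fmt.Sprintf("%s %v: %v", name, args, err))
		}
	}

	func main() {
		oldBin := "/tmp/minikube-v1.32.0.885661594" // released binary copied to a temp path
		newBin := "out/minikube-linux-amd64"        // binary under test
		profile := "stopped-upgrade-998220"

		// 1. Provision with the released binary.
		run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
		// 2. Stop the cluster while it is still owned by the old binary.
		run(oldBin, "-p", profile, "stop")
		// 3. Start again with the new binary; this is the upgrade path under test.
		run(newBin, "start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
	}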

                                                
                                    
x
+
TestPause/serial/Start (60.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-380517 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-380517 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m0.711243416s)
--- PASS: TestPause/serial/Start (60.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (71.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m11.293351419s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.29s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (52.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-380517 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-380517 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.041929531s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-998220
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-998220: (1.417251376s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (61.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.649807178s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-547391 "pgrep -a kubelet"
I1115 10:04:31.483100  247445 config.go:182] Loaded profile config "auto-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mtf9m" [7bbeaaa4-ce4b-4685-ae58-1b06931a109f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mtf9m" [7bbeaaa4-ce4b-4685-ae58-1b06931a109f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00417265s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (104.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m44.963743356s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
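
The DNS, Localhost and HairPin subtests for this plugin all exec into the netcat deployment created earlier: nslookup of kubernetes.default exercises in-cluster DNS, nc against localhost:8080 confirms the pod can reach its own port directly, and nc against the netcat service name confirms hairpin traffic (a pod reaching itself back through its own Service). A minimal sketch of issuing the same three probes through kubectl, using the context and deployment names from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs a shell command inside the netcat deployment of the given context.
	func probe(context, shellCmd string) error {
		return exec.Command("kubectl", "--context", context,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
	}

	func main() {
		ctx := "auto-547391"
		checks := map[string]string{
			"dns":       "nslookup kubernetes.default",
			"localhost": "nc -w 5 -i 5 -z localhost 8080",
			"hairpin":   "nc -w 5 -i 5 -z netcat 8080",
		}
		for name, cmd := range checks {
			if err := probe(ctx, cmd); err != nil {
				fmt.Printf("%s probe failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s probe ok\n", name)
		}
	}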

                                                
                                    
x
+
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-380517 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-380517 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-380517 --output=json --layout=cluster: exit status 2 (276.857089ms)

                                                
                                                
-- stdout --
	{"Name":"pause-380517","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-380517","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
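
A paused cluster reports HTTP-style codes in the layout JSON above: 418 ("Paused") for the cluster and its apiserver, 405 ("Stopped") for the kubelet, and 200 for the kubeconfig, and the command exits 2, presumably because not every component is in the OK state. A minimal sketch of decoding a trimmed copy of that output, assuming hand-written struct fields named after the visible keys:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// ClusterState covers only the keys used below; it is an illustration,
	// not minikube's own layout type.
	type ClusterState struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		// Trimmed from the stdout above.
		raw := `{"Name":"pause-380517","StatusCode":418,"StatusName":"Paused",
		         "Nodes":[{"Name":"pause-380517","StatusCode":200,"StatusName":"OK"}]}`
		var st ClusterState
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
		}
	}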

                                                
                                    
x
+
TestPause/serial/Unpause (0.97s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-380517 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.19s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-380517 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-380517 --alsologtostderr -v=5: (1.193761679s)
--- PASS: TestPause/serial/PauseAgain (1.19s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-380517 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-380517 --alsologtostderr -v=5: (1.013537525s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (89.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m29.108920186s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.681090429s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2bcsb" [7c6f2bbb-3b89-4d2d-9af2-cfcff7be6dbe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005352532s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
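
The ControllerPod subtests wait up to 10 minutes for at least one pod matching the plugin's label selector (app=kindnet here, k8s-app=calico-node further below) to be Running before the connectivity checks start. A minimal polling sketch of that wait via kubectl; the helper name and polling approach are assumptions, not the suite's real implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLabel polls until a pod matching the selector reports phase
	// Running, or the timeout expires.
	func waitForLabel(context, namespace, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil && strings.Contains(string(out), "Running") {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("no Running pod for %q within %s", selector, timeout)
	}

	func main() {
		if err := waitForLabel("kindnet-547391", "kube-system", "app=kindnet", 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kindnet controller pod is running")
	}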

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-547391 "pgrep -a kubelet"
I1115 10:05:38.374982  247445 config.go:182] Loaded profile config "kindnet-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-547391 replace --force -f testdata/netcat-deployment.yaml: (1.35478595s)
I1115 10:05:40.051767  247445 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-58h7b" [865866af-64ba-4c2a-953b-989180007fb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-58h7b" [865866af-64ba-4c2a-953b-989180007fb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004053799s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.242446868s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4f7ft" [f039b857-4a16-4591-b4b8-be5d57819d9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004667152s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-547391 "pgrep -a kubelet"
I1115 10:06:24.062460  247445 config.go:182] Loaded profile config "calico-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-24k59" [7a0322a8-ae23-4f7d-9571-8273f48fd8db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-24k59" [7a0322a8-ae23-4f7d-9571-8273f48fd8db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00384733s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-547391 "pgrep -a kubelet"
I1115 10:06:26.451522  247445 config.go:182] Loaded profile config "enable-default-cni-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mtw2m" [6a2c2eb6-313d-42b4-bfb4-8a5a7eab266f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mtw2m" [6a2c2eb6-313d-42b4-bfb4-8a5a7eab266f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.245178351s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.49s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-547391 "pgrep -a kubelet"
I1115 10:06:26.901719  247445 config.go:182] Loaded profile config "custom-flannel-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c7nkl" [961d934a-cea6-4d65-af86-0cd2a0b931ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c7nkl" [961d934a-cea6-4d65-af86-0cd2a0b931ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005282086s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
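The DNS, Localhost, and HairPin subtests above each come down to a single kubectl exec against the netcat deployment: a DNS lookup of kubernetes.default, a TCP check to localhost:8080, and a hairpin check back to the service name. A compact sketch of the three probes, with the context name assumed from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "calico-547391"
	probes := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, cmd := range probes {
		args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, cmd...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s\n", name, err, out)
	}
}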

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (59.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-547391 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (59.07672644s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.08s)
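To reproduce such a start outside the harness with the same 15-minute budget as --wait-timeout, one option is to bound the subprocess with a context deadline. The binary path and flags below are copied from the invocation above; bounding via context is an illustration, not how the test itself enforces the timeout.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	// The process is killed if the deadline expires before start completes.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "bridge-547391", "--memory=3072", "--cni=bridge",
		"--driver=kvm2", "--container-runtime=crio", "--wait=true")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}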

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (78.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-013388 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-013388 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m18.35764525s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (78.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (118.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-993257 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-993257 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m58.196874109s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-c9t46" [2b53be5a-a6c3-4fe3-a9c5-e5f07fcc47f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004814311s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-547391 "pgrep -a kubelet"
I1115 10:07:24.973706  247445 config.go:182] Loaded profile config "flannel-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rz9x9" [d0d7cede-e5d2-4539-9ec7-ed50059ffc0f] Pending
helpers_test.go:352: "netcat-cd4db9dbf-rz9x9" [d0d7cede-e5d2-4539-9ec7-ed50059ffc0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004618877s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-547391 "pgrep -a kubelet"
I1115 10:07:51.626260  247445 config.go:182] Loaded profile config "bridge-547391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-547391 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s6trc" [7754a4a8-4d24-4efa-a505-d5951c57c671] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s6trc" [7754a4a8-4d24-4efa-a505-d5951c57c671] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003826353s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-006860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-006860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.009523367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-547391 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-547391 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-013388 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2a589175-0f4b-40b9-ad77-7fb341ad6dcc] Pending
helpers_test.go:352: "busybox" [2a589175-0f4b-40b9-ad77-7fb341ad6dcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2a589175-0f4b-40b9-ad77-7fb341ad6dcc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004014419s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-013388 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)
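The final step of DeployApp reads the open-file limit inside the pod with `ulimit -n`. A small sketch of that check, assuming the busybox pod from testdata/busybox.yaml is already Running in the old-k8s-version-013388 context as shown above:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-013388",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected output:", string(out))
		return
	}
	fmt.Println("open file descriptor limit in pod:", n)
}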

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-649249 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-649249 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.848674601s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-013388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-013388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.163888742s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-013388 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (86.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-013388 --alsologtostderr -v=3
E1115 10:08:50.488141  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-013388 --alsologtostderr -v=3: (1m26.709801983s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-006860 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c91219fe-adef-4102-9891-37148fa51d3b] Pending
helpers_test.go:352: "busybox" [c91219fe-adef-4102-9891-37148fa51d3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c91219fe-adef-4102-9891-37148fa51d3b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004423246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-006860 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-993257 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3305c291-9e03-40cf-ac52-0d75e865d686] Pending
helpers_test.go:352: "busybox" [3305c291-9e03-40cf-ac52-0d75e865d686] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3305c291-9e03-40cf-ac52-0d75e865d686] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004028598s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-993257 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-006860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-006860 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (82.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-006860 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-006860 --alsologtostderr -v=3: (1m22.73609218s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-993257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-993257 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (83.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-993257 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-993257 --alsologtostderr -v=3: (1m23.618090024s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (83.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-649249 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a927156b-78ed-4251-818b-e2320ae0e48b] Pending
helpers_test.go:352: "busybox" [a927156b-78ed-4251-818b-e2320ae0e48b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a927156b-78ed-4251-818b-e2320ae0e48b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004954172s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-649249 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-649249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-649249 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-649249 --alsologtostderr -v=3
E1115 10:09:31.714951  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:31.721393  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:31.733288  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:31.754811  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:31.796653  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:31.878179  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:32.039913  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:32.361822  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:33.003993  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:34.285866  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:36.848155  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:41.969924  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:09:52.212766  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-649249 --alsologtostderr -v=3: (1m23.636536533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-013388 -n old-k8s-version-013388
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-013388 -n old-k8s-version-013388: exit status 7 (64.146944ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-013388 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
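As the "(may be ok)" note above suggests, a stopped host makes `minikube status` exit non-zero (7 here) while still printing the state on stdout, so a caller has to inspect the exit code rather than treat it as a hard failure. A sketch of that handling, using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-013388")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this log a stopped host reports exit status 7 with "Stopped" on stdout.
		fmt.Printf("status exited %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host status: %s", out)
}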

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-013388 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1115 10:10:12.695201  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-013388 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.396449857s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-013388 -n old-k8s-version-013388
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-006860 -n embed-certs-006860
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-006860 -n embed-certs-006860: exit status 7 (77.120081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-006860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-006860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-006860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (44.445768711s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-006860 -n embed-certs-006860
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993257 -n no-preload-993257
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993257 -n no-preload-993257: exit status 7 (74.463489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-993257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (76.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-993257 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:10:32.132339  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.138763  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.150159  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.171580  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.213035  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.294593  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.456138  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:32.777917  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:33.420058  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:34.701935  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:37.264172  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-993257 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.851573348s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-993257 -n no-preload-993257
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (76.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsp8r" [646e0074-0b44-4e28-a1d7-5ab4913fb36d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1115 10:10:42.385791  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsp8r" [646e0074-0b44-4e28-a1d7-5ab4913fb36d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004223271s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsp8r" [646e0074-0b44-4e28-a1d7-5ab4913fb36d] Running
E1115 10:10:52.627484  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:10:53.657604  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004791659s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-013388 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-013388 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
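The image audit above lists the images loaded in the profile and reports anything it does not expect (the busybox and kindnetd images here). The sketch below is only an approximation: it uses plain `image list` output, assumed to be one image reference per line, and a registry-prefix heuristic, whereas the real test compares against the expected image set for the Kubernetes version under test.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"old-k8s-version-013388", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		// Heuristic only: treat anything outside registry.k8s.io as unexpected.
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("found non-minikube image:", img)
		}
	}
}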

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-013388 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-013388 --alsologtostderr -v=1: (1.072579861s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-013388 -n old-k8s-version-013388
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-013388 -n old-k8s-version-013388: exit status 2 (255.341146ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-013388 -n old-k8s-version-013388
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-013388 -n old-k8s-version-013388: exit status 2 (255.046899ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-013388 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-013388 -n old-k8s-version-013388
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-013388 -n old-k8s-version-013388
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.26s)
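The Pause subtest is a round trip: pause the profile, confirm the API server reports Paused and the kubelet Stopped (status exits 2 while paused, as shown above), then unpause and check again. A sketch of that cycle with the profile name from this run; non-zero status exits are printed rather than treated as failures.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\nerr=%v\n%s\n", args, err, out)
}

func main() {
	p := "old-k8s-version-013388"
	run("pause", "-p", p)
	// While paused, status exits 2 and prints Paused / Stopped, as in the log.
	run("status", "--format={{.APIServer}}", "-p", p)
	run("status", "--format={{.Kubelet}}", "-p", p)
	run("unpause", "-p", p)
	run("status", "--format={{.APIServer}}", "-p", p)
}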

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249: exit status 7 (78.73421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-649249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-649249 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-649249 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (55.784719544s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (67.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-747924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-747924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.986336681s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b8qmx" [429b8a7c-1e0e-4aae-8788-b64dd5bc5a9c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1115 10:11:13.109676  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b8qmx" [429b8a7c-1e0e-4aae-8788-b64dd5bc5a9c] Running
E1115 10:11:17.878502  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:17.884913  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:17.896345  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:17.917735  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:17.959174  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:18.040667  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:18.202570  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:18.524282  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:19.166080  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:20.447682  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004210062s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b8qmx" [429b8a7c-1e0e-4aae-8788-b64dd5bc5a9c] Running
E1115 10:11:23.009098  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.679844  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.686394  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.697882  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.719379  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.760958  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:26.842481  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.004191  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.266601  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.273049  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.284547  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.306055  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.326573  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.348173  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.429743  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.591377  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.090154283s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-006860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-006860 image list --format=json
E1115 10:11:27.913538  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:27.968961  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-006860 --alsologtostderr -v=1
E1115 10:11:28.130712  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:28.555273  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-006860 --alsologtostderr -v=1: (1.160116325s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-006860 -n embed-certs-006860
E1115 10:11:29.250388  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-006860 -n embed-certs-006860: exit status 2 (312.003565ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-006860 -n embed-certs-006860
E1115 10:11:29.837827  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-006860 -n embed-certs-006860: exit status 2 (451.418564ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-006860 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-006860 --alsologtostderr -v=1: (1.103867667s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-006860 -n embed-certs-006860
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-006860 -n embed-certs-006860
E1115 10:11:31.811754  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)
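For readers following the Pause subtests, the sequence above is: pause the profile, confirm via status that the apiserver reports Paused and the kubelet Stopped, then unpause and re-check. The status commands exit with status 2 while components are paused or stopped, which the test explicitly tolerates ("may be ok"). A minimal sketch of the same sequence driven from Go (a hypothetical helper, not the actual start_stop_delete_test.go code; the binary path and the embed-certs-006860 profile name are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

// run wraps one minikube invocation, mirroring the commands logged in the
// Pause subtest above.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "embed-certs-006860"

	// pause, check apiserver/kubelet state, unpause, check again.
	// status may exit nonzero (status 2 in the log) while paused/stopped.
	steps := [][]string{
		{"pause", "-p", profile, "--alsologtostderr", "-v=1"},
		{"status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
		{"status", "--format={{.Kubelet}}", "-p", profile, "-n", profile},
		{"unpause", "-p", profile, "--alsologtostderr", "-v=1"},
		{"status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
		{"status", "--format={{.Kubelet}}", "-p", profile, "-n", profile},
	}
	for _, args := range steps {
		out, err := run(args...)
		fmt.Printf("%v -> %q (err=%v)\n", args, out, err)
	}
}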

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)
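Each PersistentMounts subtest above runs the same probe over minikube ssh, varying only the mount path: "df -t ext4 <path> | grep <path>" succeeds only if the path is backed by an ext4 filesystem. A minimal sketch of that loop (hypothetical, not the actual iso_test.go code; the binary path and the guest-620749 profile name come from the log above):

package main

import (
	"fmt"
	"os/exec"
)

// Paths exercised by the PersistentMounts subtests in the log above.
var persistentMounts = []string{
	"/data",
	"/var/lib/docker",
	"/var/lib/cni",
	"/var/lib/kubelet",
	"/var/lib/minikube",
	"/var/lib/toolbox",
	"/var/lib/boot2docker",
}

func main() {
	for _, mount := range persistentMounts {
		// Same shape as the logged command:
		//   out/minikube-linux-amd64 -p guest-620749 ssh "df -t ext4 <mount> | grep <mount>"
		probe := fmt.Sprintf("df -t ext4 %s | grep %s", mount, mount)
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-620749", "ssh", probe).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: not an ext4 persistent mount (%v)\n", mount, err)
			continue
		}
		fmt.Printf("%s: %s", mount, out)
	}
}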

                                                
                                    
x
+
TestISOImage/VersionJSON (0.2s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 820bf516181cabed83ba2b27d39e21b2adf01240
iso_test.go:118:   iso_version: v1.37.0-1762018871-21834
iso_test.go:118:   kicbase_version: v0.0.48-1760939008-21773
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.20s)
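The VersionJSON check cats /version.json from the guest and parses it; the fields printed at iso_test.go:118 above suggest a flat JSON object. A minimal decoding sketch follows (the struct, JSON key names, and sample payload are assumptions inferred from those log lines, not the actual test code):

package main

import (
	"encoding/json"
	"fmt"
)

// Fields mirror what iso_test.go:118 printed above; the exact JSON key
// names are an assumption based on those labels.
type isoVersion struct {
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
}

func main() {
	// Sample payload built from the values in the log above.
	raw := []byte(`{
		"commit": "820bf516181cabed83ba2b27d39e21b2adf01240",
		"iso_version": "v1.37.0-1762018871-21834",
		"kicbase_version": "v0.0.48-1760939008-21773",
		"minikube_version": "v1.37.0"
	}`)

	var v isoVersion
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Printf("commit=%s iso=%s kicbase=%s minikube=%s\n",
		v.Commit, v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion)
}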

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.21s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-620749 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.21s)
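The eBPFSupport probe only tests whether /sys/kernel/btf/vmlinux exists; that file is present when the guest kernel ships BTF type information, which eBPF CO-RE tooling depends on. The same check run locally, as a minimal Go sketch (illustrative only, not the test code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent to the logged probe:
	//   test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}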
E1115 10:11:36.933735  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:37.521639  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:38.372737  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:39.319156  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pwj9b" [d83d96dd-b265-4ef2-8901-7f6c5b15d4ed] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1115 10:11:47.175393  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:47.763726  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pwj9b" [d83d96dd-b265-4ef2-8901-7f6c5b15d4ed] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00568403s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jh7mp" [80220610-fe80-4213-bc83-d30f95b81d80] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jh7mp" [80220610-fe80-4213-bc83-d30f95b81d80] Running
E1115 10:11:54.071892  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/kindnet-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:11:56.249566  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/functional-471384/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.00430441s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pwj9b" [d83d96dd-b265-4ef2-8901-7f6c5b15d4ed] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004357005s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-993257 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jh7mp" [80220610-fe80-4213-bc83-d30f95b81d80] Running
E1115 10:11:58.854968  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004867727s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-649249 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-993257 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-993257 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993257 -n no-preload-993257
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993257 -n no-preload-993257: exit status 2 (254.68605ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-993257 -n no-preload-993257
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-993257 -n no-preload-993257: exit status 2 (264.138637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-993257 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-993257 -n no-preload-993257
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-993257 -n no-preload-993257
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-649249 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-649249 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-649249 --alsologtostderr -v=1: (1.002811883s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249: exit status 2 (229.345141ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249: exit status 2 (255.810186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-649249 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-649249 --alsologtostderr -v=1: (1.048803895s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-649249 -n default-k8s-diff-port-649249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-747924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1115 10:12:07.657241  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-747924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027956878s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-747924 --alsologtostderr -v=3
E1115 10:12:15.579112  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/auto-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.767386  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.773870  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.785261  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.806828  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.848364  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:18.929911  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:19.091531  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:19.413224  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:20.055305  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-747924 --alsologtostderr -v=3: (12.729492915s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.73s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-747924 -n newest-cni-747924
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-747924 -n newest-cni-747924: exit status 7 (60.588264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-747924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-747924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:12:21.337156  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:23.899540  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:29.021341  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:39.262618  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:39.816510  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/calico-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:48.618909  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/enable-default-cni-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:49.207288  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/custom-flannel-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:51.887911  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:51.894304  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:51.905714  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:51.927243  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:51.968649  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:52.050563  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:52.212113  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:52.533749  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:53.176001  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:12:54.457856  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-747924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (33.481794512s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-747924 -n newest-cni-747924
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-747924 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-747924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-747924 --alsologtostderr -v=1: (1.226071385s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-747924 -n newest-cni-747924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-747924 -n newest-cni-747924: exit status 2 (295.146458ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-747924 -n newest-cni-747924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-747924 -n newest-cni-747924: exit status 2 (259.915093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-747924 --alsologtostderr -v=1
E1115 10:12:57.019806  247445 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/bridge-547391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-747924 -n newest-cni-747924
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-747924 -n newest-cni-747924
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                    

Test skip (40/351)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.29
267 TestNetworkPlugins/group/cilium 4
296 TestStartStop/group/disable-driver-mounts 0.23
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-663794 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (5.29s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-547391 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-547391

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547391"

                                                
                                                
----------------------- debugLogs end: kubenet-547391 [took: 5.091690004s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-547391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-547391
--- SKIP: TestNetworkPlugins/group/kubenet (5.29s)

                                                
                                    
TestNetworkPlugins/group/cilium (4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-547391 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-547391" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-547391

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-547391" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547391"

                                                
                                                
----------------------- debugLogs end: cilium-547391 [took: 3.818616765s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-547391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-547391
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-457207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-457207
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    