Test Report: KVM_Linux_crio 21974

                    
4cf3e568bd19aa010164d0f2afa2e28844e6f351:2025-11-26:42526

Tests failed (2/351)

Order  Failed test                   Duration (s)
37     TestAddons/parallel/Ingress   158.06
244    TestPreload                   150.37
TestAddons/parallel/Ingress (158.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-198878 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-198878 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-198878 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [cb70cb39-5ff1-4d2b-b014-86048256ca26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [cb70cb39-5ff1-4d2b-b014-86048256ca26] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003302496s
I1126 19:38:09.144494   11003 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-198878 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.829194068s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-198878 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.123
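The decisive failure above is the in-VM `curl` that dies with `ssh: Process exited with status 28`: ssh propagates the remote command's exit status, and in curl's exit-code table 28 means the operation timed out before the ingress answered. A minimal sketch for triaging these codes when reading such reports (the `curl_exit_reason` helper is hypothetical, not part of the test suite):

```shell
# Hypothetical helper: translate common curl exit codes seen in
# "minikube ssh curl ..." failures into human-readable reasons.
curl_exit_reason() {
  case "$1" in
    6)  echo "could not resolve host" ;;
    7)  echo "failed to connect" ;;
    28) echo "operation timed out" ;;
    *)  echo "unknown curl exit code: $1" ;;
  esac
}

# The report shows "Process exited with status 28":
curl_exit_reason 28   # operation timed out
```

A timeout (28), as opposed to a refused connection (7), usually means the ingress controller accepted the connection but never produced a response within curl's deadline, which is consistent with the controller pod having been reported Ready earlier in the log.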
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-198878 -n addons-198878
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 logs -n 25: (1.377353881s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-499024                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-499024 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
	│ start   │ --download-only -p binary-mirror-630783 --alsologtostderr --binary-mirror http://127.0.0.1:33899 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-630783 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	│ delete  │ -p binary-mirror-630783                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-630783 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
	│ addons  │ enable dashboard -p addons-198878                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	│ addons  │ disable dashboard -p addons-198878                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	│ start   │ -p addons-198878 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ enable headlamp -p addons-198878 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ ssh     │ addons-198878 ssh cat /opt/local-path-provisioner/pvc-a22e263c-d92b-4e58-83ac-82f62be484b9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:38 UTC │
	│ addons  │ addons-198878 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ ip      │ addons-198878 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
	│ addons  │ addons-198878 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-198878                                                                                                                                                                                                                                                                                                                                                                                         │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ addons  │ addons-198878 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ addons  │ addons-198878 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ ssh     │ addons-198878 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │                     │
	│ addons  │ addons-198878 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ addons  │ addons-198878 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
	│ ip      │ addons-198878 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-198878        │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │ 26 Nov 25 19:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:35:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:35:08.349724   11611 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:35:08.349923   11611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:08.349931   11611 out.go:374] Setting ErrFile to fd 2...
	I1126 19:35:08.349936   11611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:08.350142   11611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 19:35:08.350593   11611 out.go:368] Setting JSON to false
	I1126 19:35:08.351364   11611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1058,"bootTime":1764184650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:35:08.351411   11611 start.go:143] virtualization: kvm guest
	I1126 19:35:08.353165   11611 out.go:179] * [addons-198878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:35:08.354407   11611 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:35:08.354426   11611 notify.go:221] Checking for updates...
	I1126 19:35:08.356651   11611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:35:08.357904   11611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:35:08.359170   11611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:35:08.360267   11611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:35:08.361509   11611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:35:08.362885   11611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:35:08.394162   11611 out.go:179] * Using the kvm2 driver based on user configuration
	I1126 19:35:08.395419   11611 start.go:309] selected driver: kvm2
	I1126 19:35:08.395443   11611 start.go:927] validating driver "kvm2" against <nil>
	I1126 19:35:08.395455   11611 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:35:08.396129   11611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:35:08.396818   11611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:35:08.396847   11611 cni.go:84] Creating CNI manager for ""
	I1126 19:35:08.396895   11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 19:35:08.396908   11611 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1126 19:35:08.396947   11611 start.go:353] cluster config:
	{Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1126 19:35:08.397092   11611 iso.go:125] acquiring lock: {Name:mkfe3dbb7c1a56d5a5080a4e71d079899ad19ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:35:08.398596   11611 out.go:179] * Starting "addons-198878" primary control-plane node in "addons-198878" cluster
	I1126 19:35:08.399754   11611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:08.399777   11611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 19:35:08.399783   11611 cache.go:65] Caching tarball of preloaded images
	I1126 19:35:08.399845   11611 preload.go:238] Found /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 19:35:08.399855   11611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:35:08.400171   11611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json ...
	I1126 19:35:08.400194   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json: {Name:mke50fba2276487ff37a4cbe33afee7969a252fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:08.400346   11611 start.go:360] acquireMachinesLock for addons-198878: {Name:mk682108a3404f6d853d2e6b676abccdb6a57902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1126 19:35:08.400415   11611 start.go:364] duration metric: took 52.23µs to acquireMachinesLock for "addons-198878"
	I1126 19:35:08.400439   11611 start.go:93] Provisioning new machine with config: &{Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:08.400485   11611 start.go:125] createHost starting for "" (driver="kvm2")
	I1126 19:35:08.402079   11611 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1126 19:35:08.402259   11611 start.go:159] libmachine.API.Create for "addons-198878" (driver="kvm2")
	I1126 19:35:08.402294   11611 client.go:173] LocalClient.Create starting
	I1126 19:35:08.402394   11611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem
	I1126 19:35:08.503600   11611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem
	I1126 19:35:08.575826   11611 main.go:143] libmachine: creating domain...
	I1126 19:35:08.575849   11611 main.go:143] libmachine: creating network...
	I1126 19:35:08.577227   11611 main.go:143] libmachine: found existing default network
	I1126 19:35:08.577413   11611 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1126 19:35:08.577942   11611 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c035e0}
	I1126 19:35:08.578050   11611 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-198878</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1126 19:35:08.583875   11611 main.go:143] libmachine: creating private network mk-addons-198878 192.168.39.0/24...
	I1126 19:35:08.653847   11611 main.go:143] libmachine: private network mk-addons-198878 192.168.39.0/24 created
	I1126 19:35:08.654176   11611 main.go:143] libmachine: <network>
	  <name>mk-addons-198878</name>
	  <uuid>814dc8f9-7f03-4085-9b4b-191d5f733f4b</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:96:2f:0c'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1126 19:35:08.654211   11611 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 ...
	I1126 19:35:08.654229   11611 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21974-7091/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1126 19:35:08.654238   11611 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:35:08.654294   11611 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21974-7091/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21974-7091/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1126 19:35:08.910353   11611 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa...
	I1126 19:35:09.043638   11611 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk...
	I1126 19:35:09.043677   11611 main.go:143] libmachine: Writing magic tar header
	I1126 19:35:09.043696   11611 main.go:143] libmachine: Writing SSH key tar header
	I1126 19:35:09.043772   11611 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 ...
	I1126 19:35:09.043826   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878
	I1126 19:35:09.043856   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 (perms=drwx------)
	I1126 19:35:09.043872   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube/machines
	I1126 19:35:09.043881   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube/machines (perms=drwxr-xr-x)
	I1126 19:35:09.043891   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:35:09.043902   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube (perms=drwxr-xr-x)
	I1126 19:35:09.043910   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091
	I1126 19:35:09.043924   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091 (perms=drwxrwxr-x)
	I1126 19:35:09.043934   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1126 19:35:09.043941   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1126 19:35:09.043953   11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1126 19:35:09.043960   11611 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1126 19:35:09.043971   11611 main.go:143] libmachine: checking permissions on dir: /home
	I1126 19:35:09.043977   11611 main.go:143] libmachine: skipping /home - not owner
	I1126 19:35:09.043981   11611 main.go:143] libmachine: defining domain...
	I1126 19:35:09.045340   11611 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-198878</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-198878'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1126 19:35:09.053280   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:7c:fe:25 in network default
	I1126 19:35:09.053994   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:09.054014   11611 main.go:143] libmachine: starting domain...
	I1126 19:35:09.054018   11611 main.go:143] libmachine: ensuring networks are active...
	I1126 19:35:09.054901   11611 main.go:143] libmachine: Ensuring network default is active
	I1126 19:35:09.055352   11611 main.go:143] libmachine: Ensuring network mk-addons-198878 is active
	I1126 19:35:09.055972   11611 main.go:143] libmachine: getting domain XML...
	I1126 19:35:09.056938   11611 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-198878</name>
	  <uuid>3a31c91d-5706-460a-9959-5cc9b1ab6144</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:39:0c:6e'/>
	      <source network='mk-addons-198878'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7c:fe:25'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1126 19:35:10.341247   11611 main.go:143] libmachine: waiting for domain to start...
	I1126 19:35:10.342782   11611 main.go:143] libmachine: domain is now running
	I1126 19:35:10.342801   11611 main.go:143] libmachine: waiting for IP...
	I1126 19:35:10.343514   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:10.344033   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:10.344044   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:10.344324   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:10.344361   11611 retry.go:31] will retry after 266.829865ms: waiting for domain to come up
	I1126 19:35:10.612957   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:10.613557   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:10.613571   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:10.613866   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:10.613920   11611 retry.go:31] will retry after 336.441283ms: waiting for domain to come up
	I1126 19:35:10.951753   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:10.952376   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:10.952398   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:10.952691   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:10.952721   11611 retry.go:31] will retry after 322.116478ms: waiting for domain to come up
	I1126 19:35:11.276471   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:11.277110   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:11.277130   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:11.277459   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:11.277501   11611 retry.go:31] will retry after 473.430506ms: waiting for domain to come up
	I1126 19:35:11.752063   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:11.752553   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:11.752570   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:11.752856   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:11.752890   11611 retry.go:31] will retry after 744.319165ms: waiting for domain to come up
	I1126 19:35:12.498775   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:12.499302   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:12.499318   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:12.499634   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:12.499662   11611 retry.go:31] will retry after 878.2162ms: waiting for domain to come up
	I1126 19:35:13.379060   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:13.379618   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:13.379638   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:13.380041   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:13.380104   11611 retry.go:31] will retry after 804.696615ms: waiting for domain to come up
	I1126 19:35:14.185922   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:14.186436   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:14.186454   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:14.186793   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:14.186829   11611 retry.go:31] will retry after 1.418235708s: waiting for domain to come up
	I1126 19:35:15.606226   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:15.606752   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:15.606784   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:15.607186   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:15.607221   11611 retry.go:31] will retry after 1.574841792s: waiting for domain to come up
	I1126 19:35:17.184011   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:17.184520   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:17.184533   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:17.184852   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:17.184881   11611 retry.go:31] will retry after 1.833984055s: waiting for domain to come up
	I1126 19:35:19.020196   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:19.020728   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:19.020744   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:19.021112   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:19.021148   11611 retry.go:31] will retry after 2.745043916s: waiting for domain to come up
	I1126 19:35:21.770218   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:21.770828   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:21.770848   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:21.771186   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:21.771225   11611 retry.go:31] will retry after 2.194652937s: waiting for domain to come up
	I1126 19:35:23.967573   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:23.968013   11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
	I1126 19:35:23.968027   11611 main.go:143] libmachine: trying to list again with source=arp
	I1126 19:35:23.968254   11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
	I1126 19:35:23.968281   11611 retry.go:31] will retry after 3.679292601s: waiting for domain to come up
	I1126 19:35:27.652134   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.652711   11611 main.go:143] libmachine: domain addons-198878 has current primary IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.652725   11611 main.go:143] libmachine: found domain IP: 192.168.39.123
	I1126 19:35:27.652731   11611 main.go:143] libmachine: reserving static IP address...
	I1126 19:35:27.653225   11611 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-198878", mac: "52:54:00:39:0c:6e", ip: "192.168.39.123"} in network mk-addons-198878
	I1126 19:35:27.845227   11611 main.go:143] libmachine: reserved static IP address 192.168.39.123 for domain addons-198878
	I1126 19:35:27.845253   11611 main.go:143] libmachine: waiting for SSH...
	I1126 19:35:27.845271   11611 main.go:143] libmachine: Getting to WaitForSSH function...
	I1126 19:35:27.847765   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.848065   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:27.848134   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.848318   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:27.848571   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:27.848583   11611 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1126 19:35:27.968887   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:35:27.969281   11611 main.go:143] libmachine: domain creation complete
	I1126 19:35:27.970743   11611 machine.go:94] provisionDockerMachine start ...
	I1126 19:35:27.973294   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.973696   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:27.973726   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:27.973903   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:27.974170   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:27.974182   11611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:35:28.094548   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1126 19:35:28.094578   11611 buildroot.go:166] provisioning hostname "addons-198878"
	I1126 19:35:28.097497   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.097952   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.097972   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.098140   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:28.098327   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:28.098340   11611 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-198878 && echo "addons-198878" | sudo tee /etc/hostname
	I1126 19:35:28.237328   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-198878
	
	I1126 19:35:28.240263   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.240717   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.240741   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.240871   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:28.241057   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:28.241073   11611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-198878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-198878/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-198878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:35:28.370219   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:35:28.370246   11611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21974-7091/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-7091/.minikube}
	I1126 19:35:28.370261   11611 buildroot.go:174] setting up certificates
	I1126 19:35:28.370270   11611 provision.go:84] configureAuth start
	I1126 19:35:28.373211   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.373577   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.373616   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.375695   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.376060   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.376105   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.376229   11611 provision.go:143] copyHostCerts
	I1126 19:35:28.376301   11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/ca.pem (1082 bytes)
	I1126 19:35:28.424072   11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/cert.pem (1123 bytes)
	I1126 19:35:28.424262   11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/key.pem (1675 bytes)
	I1126 19:35:28.424343   11611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem org=jenkins.addons-198878 san=[127.0.0.1 192.168.39.123 addons-198878 localhost minikube]
	I1126 19:35:28.470104   11611 provision.go:177] copyRemoteCerts
	I1126 19:35:28.470169   11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:35:28.472606   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.472945   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.472965   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.473106   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:28.564818   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 19:35:28.596338   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1126 19:35:28.626433   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 19:35:28.657607   11611 provision.go:87] duration metric: took 287.301255ms to configureAuth
	I1126 19:35:28.657641   11611 buildroot.go:189] setting minikube options for container-runtime
	I1126 19:35:28.657823   11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:28.660409   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.660807   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:28.660830   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:28.660977   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:28.661175   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:28.661189   11611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:35:29.094992   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:35:29.095014   11611 machine.go:97] duration metric: took 1.124253596s to provisionDockerMachine
	I1126 19:35:29.095026   11611 client.go:176] duration metric: took 20.692722921s to LocalClient.Create
	I1126 19:35:29.095036   11611 start.go:167] duration metric: took 20.69277747s to libmachine.API.Create "addons-198878"
	I1126 19:35:29.095042   11611 start.go:293] postStartSetup for "addons-198878" (driver="kvm2")
	I1126 19:35:29.095050   11611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:35:29.095143   11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:35:29.098441   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.098859   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.098883   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.099043   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:29.197291   11611 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:35:29.203460   11611 info.go:137] Remote host: Buildroot 2025.02
	I1126 19:35:29.203491   11611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/addons for local assets ...
	I1126 19:35:29.203573   11611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/files for local assets ...
	I1126 19:35:29.203609   11611 start.go:296] duration metric: took 108.560809ms for postStartSetup
	I1126 19:35:29.263806   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.264316   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.264349   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.264567   11611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json ...
	I1126 19:35:29.264746   11611 start.go:128] duration metric: took 20.864251471s to createHost
	I1126 19:35:29.266967   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.267341   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.267364   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.267500   11611 main.go:143] libmachine: Using SSH client type: native
	I1126 19:35:29.267698   11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1126 19:35:29.267708   11611 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1126 19:35:29.385893   11611 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764185729.349930091
	
	I1126 19:35:29.385914   11611 fix.go:216] guest clock: 1764185729.349930091
	I1126 19:35:29.385924   11611 fix.go:229] Guest: 2025-11-26 19:35:29.349930091 +0000 UTC Remote: 2025-11-26 19:35:29.264757105 +0000 UTC m=+20.963207688 (delta=85.172986ms)
	I1126 19:35:29.385942   11611 fix.go:200] guest clock delta is within tolerance: 85.172986ms
	I1126 19:35:29.385956   11611 start.go:83] releasing machines lock for "addons-198878", held for 20.985528157s
	I1126 19:35:29.388880   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.389353   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.389381   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.390073   11611 ssh_runner.go:195] Run: cat /version.json
	I1126 19:35:29.390134   11611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:35:29.392899   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.393147   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.393321   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.393354   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.393511   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:29.393521   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:29.393537   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:29.393760   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:29.501861   11611 ssh_runner.go:195] Run: systemctl --version
	I1126 19:35:29.508847   11611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:35:30.035730   11611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:35:30.043466   11611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:35:30.043540   11611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:35:30.067861   11611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 19:35:30.067890   11611 start.go:496] detecting cgroup driver to use...
	I1126 19:35:30.067982   11611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:35:30.088718   11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:35:30.107579   11611 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:35:30.107634   11611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:35:30.125710   11611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:35:30.142753   11611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:35:30.290410   11611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:35:30.504760   11611 docker.go:234] disabling docker service ...
	I1126 19:35:30.504841   11611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:35:30.522212   11611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:35:30.538584   11611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:35:30.701383   11611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:35:30.852017   11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:35:30.869602   11611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:35:30.894746   11611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:35:30.894821   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.908043   11611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 19:35:30.908126   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.921031   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.933588   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.945992   11611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:35:30.959338   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.971942   11611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:30.994077   11611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:35:31.006970   11611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:35:31.018306   11611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 19:35:31.018385   11611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 19:35:31.040637   11611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:35:31.052793   11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:31.197786   11611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:35:31.320545   11611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:35:31.320645   11611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:35:31.326386   11611 start.go:564] Will wait 60s for crictl version
	I1126 19:35:31.326469   11611 ssh_runner.go:195] Run: which crictl
	I1126 19:35:31.331262   11611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1126 19:35:31.368975   11611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1126 19:35:31.369117   11611 ssh_runner.go:195] Run: crio --version
	I1126 19:35:31.400593   11611 ssh_runner.go:195] Run: crio --version
	I1126 19:35:31.432930   11611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1126 19:35:31.437132   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:31.437582   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:31.437610   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:31.437808   11611 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1126 19:35:31.442987   11611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:31.459387   11611 kubeadm.go:884] updating cluster {Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:35:31.459522   11611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:35:31.459571   11611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:31.489382   11611 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1126 19:35:31.489447   11611 ssh_runner.go:195] Run: which lz4
	I1126 19:35:31.494017   11611 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1126 19:35:31.499052   11611 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1126 19:35:31.499107   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1126 19:35:33.028314   11611 crio.go:462] duration metric: took 1.534339111s to copy over tarball
	I1126 19:35:33.028391   11611 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1126 19:35:34.704752   11611 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.676328192s)
	I1126 19:35:34.704784   11611 crio.go:469] duration metric: took 1.676441228s to extract the tarball
	I1126 19:35:34.704791   11611 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1126 19:35:34.747070   11611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:35:34.788943   11611 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:35:34.788978   11611 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:35:34.788986   11611 kubeadm.go:935] updating node { 192.168.39.123 8443 v1.34.1 crio true true} ...
	I1126 19:35:34.789068   11611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-198878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:35:34.789175   11611 ssh_runner.go:195] Run: crio config
	I1126 19:35:34.839584   11611 cni.go:84] Creating CNI manager for ""
	I1126 19:35:34.839611   11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 19:35:34.839626   11611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:35:34.839648   11611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-198878 NodeName:addons-198878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:35:34.839801   11611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-198878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 19:35:34.839883   11611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:35:34.853716   11611 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:35:34.853779   11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:35:34.866800   11611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1126 19:35:34.889682   11611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:35:34.913069   11611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1126 19:35:34.934736   11611 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I1126 19:35:34.939316   11611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:35:34.954543   11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:35.095406   11611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:35:35.115829   11611 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878 for IP: 192.168.39.123
	I1126 19:35:35.115856   11611 certs.go:195] generating shared ca certs ...
	I1126 19:35:35.115874   11611 certs.go:227] acquiring lock for ca certs: {Name:mkec6f6093be68a4f0c7d5c64487ef4e93539f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.116055   11611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key
	I1126 19:35:35.204411   11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt ...
	I1126 19:35:35.204436   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt: {Name:mk5f1dcbeee7ab35dcd334ff3481a2f84c9aae3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.204608   11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key ...
	I1126 19:35:35.204620   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key: {Name:mk6e0da3cd29b80eaa0b1f079dd9ca7c333201a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.204696   11611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key
	I1126 19:35:35.233957   11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt ...
	I1126 19:35:35.233978   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt: {Name:mk6714651c1858f3eb22cb38368f74c902776653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.234126   11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key ...
	I1126 19:35:35.234138   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key: {Name:mkea1a7fc500916b8dad6ebcedb9a4fa5d67c756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.234206   11611 certs.go:257] generating profile certs ...
	I1126 19:35:35.234262   11611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key
	I1126 19:35:35.234276   11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt with IP's: []
	I1126 19:35:35.380413   11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt ...
	I1126 19:35:35.380439   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: {Name:mkb52c346045c9a0090ac970d54ac6fa85cdde36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.380608   11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key ...
	I1126 19:35:35.380620   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key: {Name:mk784ea1579a9da0c782da8a4e28ad4db5f4266c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.380688   11611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa
	I1126 19:35:35.380706   11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I1126 19:35:35.586774   11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa ...
	I1126 19:35:35.586802   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa: {Name:mk162bf0c4de5afeaf80a5b426d47e902280785f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.586970   11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa ...
	I1126 19:35:35.586983   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa: {Name:mk9ed0078f49c125458146cd027e59bb8d8c13ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.587058   11611 certs.go:382] copying /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa -> /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt
	I1126 19:35:35.587144   11611 certs.go:386] copying /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa -> /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key
	I1126 19:35:35.587192   11611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key
	I1126 19:35:35.587209   11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt with IP's: []
	I1126 19:35:35.860895   11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt ...
	I1126 19:35:35.860925   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt: {Name:mk60dcfb55bfefc30302229b7eb301ddc6fb74c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.861093   11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key ...
	I1126 19:35:35.861105   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key: {Name:mk2ccb9d1359cd7942c01a64df7132791ff28560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:35.861271   11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 19:35:35.861306   11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem (1082 bytes)
	I1126 19:35:35.861331   11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:35:35.861354   11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem (1675 bytes)
	I1126 19:35:35.861857   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:35:35.902819   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 19:35:35.935751   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:35:35.970203   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:35:36.001465   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 19:35:36.033913   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 19:35:36.064782   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:35:36.095354   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 19:35:36.125193   11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:35:36.155831   11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:35:36.180797   11611 ssh_runner.go:195] Run: openssl version
	I1126 19:35:36.188104   11611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:35:36.203070   11611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:36.208723   11611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:36.208774   11611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:35:36.216421   11611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:35:36.230848   11611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:35:36.235943   11611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 19:35:36.236007   11611 kubeadm.go:401] StartCluster: {Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:35:36.236063   11611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:35:36.236142   11611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:35:36.272535   11611 cri.go:89] found id: ""
	I1126 19:35:36.272611   11611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:35:36.285627   11611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:35:36.298974   11611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:35:36.313456   11611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 19:35:36.313475   11611 kubeadm.go:158] found existing configuration files:
	
	I1126 19:35:36.313517   11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 19:35:36.325356   11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 19:35:36.325409   11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 19:35:36.337710   11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 19:35:36.349411   11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 19:35:36.349474   11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:35:36.361980   11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 19:35:36.373752   11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 19:35:36.373823   11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:35:36.386171   11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 19:35:36.397344   11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 19:35:36.397410   11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 19:35:36.409466   11611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1126 19:35:36.579153   11611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 19:35:48.895219   11611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 19:35:48.895300   11611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 19:35:48.895409   11611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 19:35:48.895526   11611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 19:35:48.895613   11611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 19:35:48.895668   11611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 19:35:48.897270   11611 out.go:252]   - Generating certificates and keys ...
	I1126 19:35:48.897343   11611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 19:35:48.897408   11611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 19:35:48.897502   11611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 19:35:48.897588   11611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 19:35:48.897686   11611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 19:35:48.897762   11611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 19:35:48.897848   11611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 19:35:48.898006   11611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-198878 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1126 19:35:48.898104   11611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 19:35:48.898266   11611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-198878 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1126 19:35:48.898365   11611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 19:35:48.898467   11611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 19:35:48.898535   11611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 19:35:48.898585   11611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 19:35:48.898648   11611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 19:35:48.898723   11611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 19:35:48.898791   11611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 19:35:48.898853   11611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 19:35:48.898901   11611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 19:35:48.898990   11611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 19:35:48.899048   11611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 19:35:48.900406   11611 out.go:252]   - Booting up control plane ...
	I1126 19:35:48.900498   11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 19:35:48.900571   11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 19:35:48.900636   11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 19:35:48.900728   11611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 19:35:48.900816   11611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 19:35:48.900948   11611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 19:35:48.901064   11611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 19:35:48.901186   11611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 19:35:48.901348   11611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 19:35:48.901458   11611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 19:35:48.901515   11611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002174283s
	I1126 19:35:48.901588   11611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 19:35:48.901683   11611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.123:8443/livez
	I1126 19:35:48.901770   11611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 19:35:48.901834   11611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 19:35:48.901898   11611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.447789772s
	I1126 19:35:48.901956   11611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.488422193s
	I1126 19:35:48.902016   11611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501997583s
	I1126 19:35:48.902137   11611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 19:35:48.902273   11611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 19:35:48.902393   11611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 19:35:48.902597   11611 kubeadm.go:319] [mark-control-plane] Marking the node addons-198878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 19:35:48.902678   11611 kubeadm.go:319] [bootstrap-token] Using token: xo527n.pd0o97bdcnwf3821
	I1126 19:35:48.904158   11611 out.go:252]   - Configuring RBAC rules ...
	I1126 19:35:48.904274   11611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 19:35:48.904378   11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 19:35:48.904523   11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 19:35:48.904698   11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 19:35:48.904802   11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 19:35:48.904873   11611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 19:35:48.904981   11611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 19:35:48.905036   11611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 19:35:48.905073   11611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 19:35:48.905092   11611 kubeadm.go:319] 
	I1126 19:35:48.905142   11611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 19:35:48.905148   11611 kubeadm.go:319] 
	I1126 19:35:48.905228   11611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 19:35:48.905238   11611 kubeadm.go:319] 
	I1126 19:35:48.905270   11611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 19:35:48.905333   11611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 19:35:48.905375   11611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 19:35:48.905381   11611 kubeadm.go:319] 
	I1126 19:35:48.905423   11611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 19:35:48.905430   11611 kubeadm.go:319] 
	I1126 19:35:48.905465   11611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 19:35:48.905475   11611 kubeadm.go:319] 
	I1126 19:35:48.905515   11611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 19:35:48.905575   11611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 19:35:48.905655   11611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 19:35:48.905673   11611 kubeadm.go:319] 
	I1126 19:35:48.905751   11611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 19:35:48.905820   11611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 19:35:48.905826   11611 kubeadm.go:319] 
	I1126 19:35:48.905895   11611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xo527n.pd0o97bdcnwf3821 \
	I1126 19:35:48.906004   11611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c9a146404250e477d139e5ac0d4339741eaa7ea23ba8a3e74d2181ed46faf684 \
	I1126 19:35:48.906024   11611 kubeadm.go:319] 	--control-plane 
	I1126 19:35:48.906031   11611 kubeadm.go:319] 
	I1126 19:35:48.906124   11611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 19:35:48.906131   11611 kubeadm.go:319] 
	I1126 19:35:48.906223   11611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xo527n.pd0o97bdcnwf3821 \
	I1126 19:35:48.906367   11611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c9a146404250e477d139e5ac0d4339741eaa7ea23ba8a3e74d2181ed46faf684 
	I1126 19:35:48.906386   11611 cni.go:84] Creating CNI manager for ""
	I1126 19:35:48.906396   11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 19:35:48.908011   11611 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1126 19:35:48.909325   11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1126 19:35:48.925100   11611 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1126 19:35:48.952333   11611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:35:48.952401   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:48.952455   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-198878 minikube.k8s.io/updated_at=2025_11_26T19_35_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-198878 minikube.k8s.io/primary=true
	I1126 19:35:49.126064   11611 ops.go:34] apiserver oom_adj: -16
	I1126 19:35:49.126162   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:49.626439   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:50.126309   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:50.626908   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:51.126854   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:51.626393   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:52.127145   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:52.626651   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:53.126599   11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:35:53.224995   11611 kubeadm.go:1114] duration metric: took 4.272643611s to wait for elevateKubeSystemPrivileges
	I1126 19:35:53.225027   11611 kubeadm.go:403] duration metric: took 16.989023109s to StartCluster
	I1126 19:35:53.225042   11611 settings.go:142] acquiring lock: {Name:mk37c98b12b8a7193cfde69315430fb7cd818f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:53.225194   11611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:35:53.225637   11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/kubeconfig: {Name:mk17b8b187372462ddf3f30b5296315dcdc9fda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:53.225851   11611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 19:35:53.225892   11611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:35:53.226013   11611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1126 19:35:53.226121   11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:53.226129   11611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-198878"
	I1126 19:35:53.226146   11611 addons.go:70] Setting gcp-auth=true in profile "addons-198878"
	I1126 19:35:53.226164   11611 mustload.go:66] Loading cluster: addons-198878
	I1126 19:35:53.226178   11611 addons.go:70] Setting inspektor-gadget=true in profile "addons-198878"
	I1126 19:35:53.226163   11611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-198878"
	I1126 19:35:53.226199   11611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-198878"
	I1126 19:35:53.226202   11611 addons.go:70] Setting ingress-dns=true in profile "addons-198878"
	I1126 19:35:53.226195   11611 addons.go:70] Setting ingress=true in profile "addons-198878"
	I1126 19:35:53.226210   11611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-198878"
	I1126 19:35:53.226224   11611 addons.go:239] Setting addon ingress-dns=true in "addons-198878"
	I1126 19:35:53.226293   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226126   11611 addons.go:70] Setting yakd=true in profile "addons-198878"
	I1126 19:35:53.226201   11611 addons.go:70] Setting metrics-server=true in profile "addons-198878"
	I1126 19:35:53.226327   11611 addons.go:239] Setting addon yakd=true in "addons-198878"
	I1126 19:35:53.226335   11611 addons.go:239] Setting addon metrics-server=true in "addons-198878"
	I1126 19:35:53.226345   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226347   11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:35:53.226371   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226172   11611 addons.go:70] Setting registry=true in profile "addons-198878"
	I1126 19:35:53.226512   11611 addons.go:239] Setting addon registry=true in "addons-198878"
	I1126 19:35:53.226538   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226137   11611 addons.go:70] Setting default-storageclass=true in profile "addons-198878"
	I1126 19:35:53.226678   11611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-198878"
	I1126 19:35:53.226193   11611 addons.go:239] Setting addon inspektor-gadget=true in "addons-198878"
	I1126 19:35:53.226850   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226211   11611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-198878"
	I1126 19:35:53.227498   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226222   11611 addons.go:70] Setting cloud-spanner=true in profile "addons-198878"
	I1126 19:35:53.227736   11611 addons.go:239] Setting addon cloud-spanner=true in "addons-198878"
	I1126 19:35:53.227763   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226220   11611 addons.go:239] Setting addon ingress=true in "addons-198878"
	I1126 19:35:53.227838   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226230   11611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-198878"
	I1126 19:35:53.227899   11611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-198878"
	I1126 19:35:53.226230   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226233   11611 addons.go:70] Setting registry-creds=true in profile "addons-198878"
	I1126 19:35:53.228530   11611 addons.go:239] Setting addon registry-creds=true in "addons-198878"
	I1126 19:35:53.228557   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226236   11611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-198878"
	I1126 19:35:53.228650   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.226241   11611 addons.go:70] Setting volcano=true in profile "addons-198878"
	I1126 19:35:53.228862   11611 addons.go:239] Setting addon volcano=true in "addons-198878"
	I1126 19:35:53.228888   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.228951   11611 out.go:179] * Verifying Kubernetes components...
	I1126 19:35:53.226244   11611 addons.go:70] Setting volumesnapshots=true in profile "addons-198878"
	I1126 19:35:53.226239   11611 addons.go:70] Setting storage-provisioner=true in profile "addons-198878"
	I1126 19:35:53.229364   11611 addons.go:239] Setting addon volumesnapshots=true in "addons-198878"
	I1126 19:35:53.229486   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.229373   11611 addons.go:239] Setting addon storage-provisioner=true in "addons-198878"
	I1126 19:35:53.229570   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.230406   11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:35:53.233030   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.233958   11611 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1126 19:35:53.235423   11611 addons.go:239] Setting addon default-storageclass=true in "addons-198878"
	I1126 19:35:53.235457   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.236147   11611 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1126 19:35:53.236154   11611 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1126 19:35:53.236474   11611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-198878"
	I1126 19:35:53.236511   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:35:53.237006   11611 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1126 19:35:53.237030   11611 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1126 19:35:53.237021   11611 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1126 19:35:53.237103   11611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1126 19:35:53.237832   11611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	W1126 19:35:53.237628   11611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1126 19:35:53.238200   11611 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1126 19:35:53.238206   11611 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1126 19:35:53.238225   11611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1126 19:35:53.238244   11611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1126 19:35:53.238255   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1126 19:35:53.238271   11611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:53.238285   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1126 19:35:53.239011   11611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:53.239022   11611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:53.239459   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1126 19:35:53.239148   11611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:53.239562   11611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:35:53.239964   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1126 19:35:53.239992   11611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:35:53.239994   11611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:53.240438   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1126 19:35:53.240000   11611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:53.240474   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1126 19:35:53.240015   11611 out.go:179]   - Using image docker.io/registry:3.0.0
	I1126 19:35:53.240031   11611 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1126 19:35:53.240058   11611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:53.241379   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1126 19:35:53.241740   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1126 19:35:53.241747   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1126 19:35:53.241750   11611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:53.241755   11611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1126 19:35:53.241763   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:35:53.242603   11611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:53.242603   11611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1126 19:35:53.242618   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1126 19:35:53.242621   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1126 19:35:53.243369   11611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:35:53.243371   11611 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1126 19:35:53.244951   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1126 19:35:53.245842   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.246324   11611 out.go:179]   - Using image docker.io/busybox:stable
	I1126 19:35:53.246330   11611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1126 19:35:53.246789   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.247332   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.247581   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1126 19:35:53.247622   11611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:53.247634   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1126 19:35:53.247646   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.247673   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.247845   11611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:53.247863   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1126 19:35:53.248531   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.248585   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.248623   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.249034   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.249064   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.249545   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.250075   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.250098   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.250359   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.250468   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1126 19:35:53.251628   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1126 19:35:53.251664   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.251701   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.251703   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.251725   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.251913   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.252339   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.252394   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.252694   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.253386   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.253418   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.253522   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.253983   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254108   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.254136   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254225   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.254316   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254581   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.254614   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254797   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254858   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.254902   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.254975   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1126 19:35:53.255043   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.255302   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.255338   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.255598   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.255632   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.255659   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.255673   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.255687   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.255700   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.255753   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.255996   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.256022   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.256364   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.257345   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.257650   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.257745   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.257775   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.257817   11611 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1126 19:35:53.257984   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.258200   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.258236   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.258416   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:35:53.259190   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1126 19:35:53.259205   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1126 19:35:53.261674   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.262076   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:35:53.262125   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:35:53.262300   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	W1126 19:35:53.451606   11611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39334->192.168.39.123:22: read: connection reset by peer
	I1126 19:35:53.451636   11611 retry.go:31] will retry after 367.345981ms: ssh: handshake failed: read tcp 192.168.39.1:39334->192.168.39.123:22: read: connection reset by peer
	I1126 19:35:53.813000   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1126 19:35:53.828815   11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1126 19:35:53.828837   11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1126 19:35:53.893111   11611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:35:53.893170   11611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 19:35:53.912638   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:35:53.947758   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:35:54.049726   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:35:54.115997   11611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1126 19:35:54.116024   11611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1126 19:35:54.139396   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:35:54.146998   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:35:54.153926   11611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1126 19:35:54.153950   11611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1126 19:35:54.175760   11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1126 19:35:54.175782   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1126 19:35:54.271854   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:35:54.326596   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:35:54.339279   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1126 19:35:54.339304   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1126 19:35:54.341098   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:35:54.487131   11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1126 19:35:54.487159   11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1126 19:35:54.731819   11611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:54.731842   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1126 19:35:54.782607   11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1126 19:35:54.782631   11611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1126 19:35:54.819313   11611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1126 19:35:54.819341   11611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1126 19:35:55.037559   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1126 19:35:55.037589   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1126 19:35:55.087437   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:35:55.143648   11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1126 19:35:55.143681   11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1126 19:35:55.346749   11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:55.346776   11611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1126 19:35:55.359003   11611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1126 19:35:55.359037   11611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1126 19:35:55.594541   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:35:55.677904   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1126 19:35:55.677931   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1126 19:35:55.817142   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1126 19:35:55.817167   11611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1126 19:35:55.859789   11611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:55.859817   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1126 19:35:55.901000   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:35:56.167899   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1126 19:35:56.167924   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1126 19:35:56.193013   11611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:56.193038   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1126 19:35:56.259821   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:35:56.511509   11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1126 19:35:56.511541   11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1126 19:35:56.822386   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:35:56.944249   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1126 19:35:56.944277   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1126 19:35:57.253687   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1126 19:35:57.253711   11611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1126 19:35:57.641955   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1126 19:35:57.641977   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1126 19:35:57.761873   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.948835531s)
	I1126 19:35:57.761971   11611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.86874927s)
	I1126 19:35:57.762002   11611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.868863062s)
	I1126 19:35:57.762005   11611 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1126 19:35:57.762644   11611 node_ready.go:35] waiting up to 6m0s for node "addons-198878" to be "Ready" ...
	I1126 19:35:57.796472   11611 node_ready.go:49] node "addons-198878" is "Ready"
	I1126 19:35:57.796509   11611 node_ready.go:38] duration metric: took 33.828906ms for node "addons-198878" to be "Ready" ...
	I1126 19:35:57.796530   11611 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:35:57.796587   11611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:35:58.043896   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1126 19:35:58.043920   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1126 19:35:58.265933   11611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-198878" context rescaled to 1 replicas
	I1126 19:35:58.407158   11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:35:58.407203   11611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1126 19:35:58.860254   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:36:00.704182   11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1126 19:36:00.706928   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:36:00.707334   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:36:00.707360   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:36:00.707518   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:36:01.033343   11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1126 19:36:01.326741   11611 addons.go:239] Setting addon gcp-auth=true in "addons-198878"
	I1126 19:36:01.326797   11611 host.go:66] Checking if "addons-198878" exists ...
	I1126 19:36:01.328759   11611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1126 19:36:01.331455   11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:36:01.331859   11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
	I1126 19:36:01.331880   11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
	I1126 19:36:01.332043   11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
	I1126 19:36:02.976883   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.064198557s)
	I1126 19:36:02.976920   11611 addons.go:495] Verifying addon ingress=true in "addons-198878"
	I1126 19:36:02.977034   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.927284356s)
	I1126 19:36:02.977111   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.837659245s)
	I1126 19:36:02.976987   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.029179445s)
	I1126 19:36:02.977212   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.830179099s)
	I1126 19:36:02.977227   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.705350345s)
	I1126 19:36:02.977287   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.650668035s)
	I1126 19:36:02.977315   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.636195335s)
	I1126 19:36:02.977348   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.889889495s)
	I1126 19:36:02.977412   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.382836318s)
	I1126 19:36:02.977439   11611 addons.go:495] Verifying addon registry=true in "addons-198878"
	I1126 19:36:02.977474   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.07645013s)
	I1126 19:36:02.977541   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.717681249s)
	I1126 19:36:02.977493   11611 addons.go:495] Verifying addon metrics-server=true in "addons-198878"
	I1126 19:36:02.977654   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.155237401s)
	W1126 19:36:02.978165   11611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:36:02.978194   11611 retry.go:31] will retry after 257.967647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
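
Note: the failure above is an ordering race, not a bad manifest — the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch as the CRDs that define it, so the first attempt can hit "no matches for kind" before the CRDs are established, and minikube's `retry.go` simply re-runs the apply after a growing delay. A minimal sketch of that retry-with-backoff pattern (illustrative names only, not minikube's actual API):

```python
import time

def retry_with_backoff(fn, attempts=5, initial_delay=0.001):
    """Retry fn until it succeeds, doubling the sleep between tries.

    Simplified sketch of the pattern behind the "will retry after ..."
    lines emitted by retry.go; names here are illustrative.
    """
    delay = initial_delay
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
            delay *= 2
    raise last_exc

calls = 0
def apply_manifests():
    # Fails until the third call, mimicking an apply that errors with
    # "no matches for kind" until the CRDs it depends on are established.
    global calls
    calls += 1
    if calls < 3:
        raise RuntimeError("no matches for kind VolumeSnapshotClass")
    return "applied"

result = retry_with_backoff(apply_manifests)
```

In the log below, the retried apply (re-issued with `--force`) completes about two seconds later, once the snapshot CRDs have been established by the apiserver.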
	I1126 19:36:02.977686   11611 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.181082608s)
	I1126 19:36:02.978246   11611 api_server.go:72] duration metric: took 9.75228674s to wait for apiserver process to appear ...
	I1126 19:36:02.978259   11611 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:36:02.978280   11611 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1126 19:36:02.979374   11611 out.go:179] * Verifying ingress addon...
	I1126 19:36:02.979383   11611 out.go:179] * Verifying registry addon...
	I1126 19:36:02.980103   11611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-198878 service yakd-dashboard -n yakd-dashboard
	
	I1126 19:36:02.981500   11611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1126 19:36:02.981746   11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1126 19:36:02.998500   11611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:36:02.998518   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.001569   11611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:36:03.001593   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.009166   11611 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1126 19:36:03.027346   11611 api_server.go:141] control plane version: v1.34.1
	I1126 19:36:03.027376   11611 api_server.go:131] duration metric: took 49.110394ms to wait for apiserver health ...
	I1126 19:36:03.027384   11611 system_pods.go:43] waiting for kube-system pods to appear ...
	W1126 19:36:03.070420   11611 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1126 19:36:03.100737   11611 system_pods.go:59] 17 kube-system pods found
	I1126 19:36:03.100783   11611 system_pods.go:61] "amd-gpu-device-plugin-zt7pv" [ffa55995-0947-4f78-957d-397eb61020a5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:03.100794   11611 system_pods.go:61] "coredns-66bc5c9577-6rsq5" [8ea66335-1b9c-4fc6-8209-2b1db648b79f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:03.100804   11611 system_pods.go:61] "coredns-66bc5c9577-wrds5" [0753c05f-adb3-4630-8ba7-2d36c8c860a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:03.100811   11611 system_pods.go:61] "etcd-addons-198878" [811da56a-2483-43f0-95de-c963c1e4b316] Running
	I1126 19:36:03.100816   11611 system_pods.go:61] "kube-apiserver-addons-198878" [1068adea-c1ab-4663-b8ad-fd2c00001978] Running
	I1126 19:36:03.100821   11611 system_pods.go:61] "kube-controller-manager-addons-198878" [2ee761c0-b054-468a-b51f-9a79467fb150] Running
	I1126 19:36:03.100829   11611 system_pods.go:61] "kube-ingress-dns-minikube" [b0787d04-501a-496c-8f26-3ecc20b7f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:03.100834   11611 system_pods.go:61] "kube-proxy-qcc2j" [6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b] Running
	I1126 19:36:03.100840   11611 system_pods.go:61] "kube-scheduler-addons-198878" [89b438dc-2243-4e7b-86d7-c94c4cc39ccd] Running
	I1126 19:36:03.100849   11611 system_pods.go:61] "metrics-server-85b7d694d7-8krt2" [437ee4fe-01d9-47d9-8864-e19c70cc2b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:03.100858   11611 system_pods.go:61] "nvidia-device-plugin-daemonset-rhjld" [67364572-4090-46f0-bd16-407a2f2eecf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:03.100867   11611 system_pods.go:61] "registry-6b586f9694-frf72" [7122caf5-586e-4824-aa05-e6968244eddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:03.100875   11611 system_pods.go:61] "registry-creds-764b6fb674-gt5ft" [da7ea709-5fa9-42a2-b62e-a749fa515bdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:03.100883   11611 system_pods.go:61] "registry-proxy-6ltms" [2e78d651-29c0-42f1-a079-f759abd8acb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:03.100891   11611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wp4z8" [676c6bdf-5a1c-4a1f-b401-7fe966339e87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:03.100899   11611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zj5kh" [a0ca0044-d67c-4946-8baf-0362d9c8c372] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:03.100908   11611 system_pods.go:61] "storage-provisioner" [4aa788b3-9723-47cc-bc23-a6c4b4b2c70d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:03.100916   11611 system_pods.go:74] duration metric: took 73.525897ms to wait for pod list to return data ...
	I1126 19:36:03.100929   11611 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:36:03.110064   11611 default_sa.go:45] found service account: "default"
	I1126 19:36:03.110112   11611 default_sa.go:55] duration metric: took 9.175646ms for default service account to be created ...
	I1126 19:36:03.110123   11611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:36:03.121823   11611 system_pods.go:86] 17 kube-system pods found
	I1126 19:36:03.121867   11611 system_pods.go:89] "amd-gpu-device-plugin-zt7pv" [ffa55995-0947-4f78-957d-397eb61020a5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1126 19:36:03.121879   11611 system_pods.go:89] "coredns-66bc5c9577-6rsq5" [8ea66335-1b9c-4fc6-8209-2b1db648b79f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:03.121891   11611 system_pods.go:89] "coredns-66bc5c9577-wrds5" [0753c05f-adb3-4630-8ba7-2d36c8c860a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:36:03.121897   11611 system_pods.go:89] "etcd-addons-198878" [811da56a-2483-43f0-95de-c963c1e4b316] Running
	I1126 19:36:03.121904   11611 system_pods.go:89] "kube-apiserver-addons-198878" [1068adea-c1ab-4663-b8ad-fd2c00001978] Running
	I1126 19:36:03.121910   11611 system_pods.go:89] "kube-controller-manager-addons-198878" [2ee761c0-b054-468a-b51f-9a79467fb150] Running
	I1126 19:36:03.121918   11611 system_pods.go:89] "kube-ingress-dns-minikube" [b0787d04-501a-496c-8f26-3ecc20b7f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:36:03.121926   11611 system_pods.go:89] "kube-proxy-qcc2j" [6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b] Running
	I1126 19:36:03.121932   11611 system_pods.go:89] "kube-scheduler-addons-198878" [89b438dc-2243-4e7b-86d7-c94c4cc39ccd] Running
	I1126 19:36:03.121940   11611 system_pods.go:89] "metrics-server-85b7d694d7-8krt2" [437ee4fe-01d9-47d9-8864-e19c70cc2b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:36:03.121953   11611 system_pods.go:89] "nvidia-device-plugin-daemonset-rhjld" [67364572-4090-46f0-bd16-407a2f2eecf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:36:03.121962   11611 system_pods.go:89] "registry-6b586f9694-frf72" [7122caf5-586e-4824-aa05-e6968244eddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:36:03.121973   11611 system_pods.go:89] "registry-creds-764b6fb674-gt5ft" [da7ea709-5fa9-42a2-b62e-a749fa515bdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:36:03.121980   11611 system_pods.go:89] "registry-proxy-6ltms" [2e78d651-29c0-42f1-a079-f759abd8acb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:36:03.121989   11611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wp4z8" [676c6bdf-5a1c-4a1f-b401-7fe966339e87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:03.121999   11611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj5kh" [a0ca0044-d67c-4946-8baf-0362d9c8c372] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:36:03.122010   11611 system_pods.go:89] "storage-provisioner" [4aa788b3-9723-47cc-bc23-a6c4b4b2c70d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:36:03.122021   11611 system_pods.go:126] duration metric: took 11.891567ms to wait for k8s-apps to be running ...
	I1126 19:36:03.122036   11611 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:36:03.122114   11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:36:03.236505   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:36:03.491222   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:03.493983   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.990732   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:03.997049   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.096013   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.235700075s)
	I1126 19:36:04.096044   11611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-198878"
	I1126 19:36:04.096045   11611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767266622s)
	I1126 19:36:04.096141   11611 system_svc.go:56] duration metric: took 974.10069ms WaitForService to wait for kubelet
	I1126 19:36:04.096167   11611 kubeadm.go:587] duration metric: took 10.870208638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:36:04.096188   11611 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:36:04.097316   11611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:36:04.097411   11611 out.go:179] * Verifying csi-hostpath-driver addon...
	I1126 19:36:04.098504   11611 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1126 19:36:04.099341   11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1126 19:36:04.099527   11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1126 19:36:04.099544   11611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1126 19:36:04.145634   11611 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1126 19:36:04.145661   11611 node_conditions.go:123] node cpu capacity is 2
	I1126 19:36:04.145673   11611 node_conditions.go:105] duration metric: took 49.479037ms to run NodePressure ...
	I1126 19:36:04.145684   11611 start.go:242] waiting for startup goroutines ...
	I1126 19:36:04.178149   11611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:36:04.178175   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:04.192444   11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1126 19:36:04.192475   11611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1126 19:36:04.298631   11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:36:04.298651   11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1126 19:36:04.426979   11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:36:04.491864   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.492017   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:04.608072   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:04.989458   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:04.993054   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.110357   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.239352   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.002801877s)
	I1126 19:36:05.496466   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:05.497651   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.638671   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:05.699035   11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.272006592s)
	I1126 19:36:05.700487   11611 addons.go:495] Verifying addon gcp-auth=true in "addons-198878"
	I1126 19:36:05.702062   11611 out.go:179] * Verifying gcp-auth addon...
	I1126 19:36:05.704026   11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1126 19:36:05.760626   11611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1126 19:36:05.760651   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:05.994444   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:05.994690   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.106998   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:06.211840   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.499948   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.501279   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:06.620062   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:06.709415   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:06.990625   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:06.991421   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.107002   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.211382   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.486148   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:07.486458   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.605466   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:07.707711   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:07.987522   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:07.992346   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.106370   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:08.208794   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.486674   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:08.487151   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.608902   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:08.711370   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:08.986687   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:08.989490   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.105298   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.208491   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.486616   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.486835   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:09.605673   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:09.708106   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:09.985749   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:09.987391   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.105701   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.211150   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.488702   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.489002   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:10.603570   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:10.711877   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:10.986548   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:10.987317   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.103494   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:11.209118   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.485287   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:11.485510   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.605336   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:11.708777   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:11.987357   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:11.987770   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.104796   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.208477   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.487069   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:12.492496   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.606045   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:12.709286   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:12.987760   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:12.988937   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.105165   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:13.207689   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.487332   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:13.491095   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.718664   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:13.720994   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:13.987104   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:13.987140   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.104040   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.209020   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.487879   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:14.489777   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.605988   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:14.710793   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:14.986386   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:14.986995   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.114075   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:15.209337   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.486577   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.486650   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:15.603945   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:15.716202   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:15.989416   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:15.993517   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.105160   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.209295   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.489446   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:16.490216   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.606249   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:16.709977   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:16.989823   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:16.992439   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.103629   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.208279   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.487267   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.487401   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:17.603584   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:17.709320   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:17.991005   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:17.993415   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.105491   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:18.208691   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:18.486462   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:18.486468   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:18.603285   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:18.717879   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.255171   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.264748   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.264807   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.264958   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.485362   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.487219   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:19.605144   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:19.708638   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:19.986278   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:19.986527   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.106477   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:20.212070   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.486233   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:20.487248   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.606853   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:20.709268   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:20.988700   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:20.988792   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.104582   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.207442   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.488192   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:21.488327   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.605590   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:21.712171   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:21.989471   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:21.989907   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.105177   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.208563   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.494405   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:22.495848   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.608423   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:22.709066   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:22.989727   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:22.990050   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.109011   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.211002   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.486254   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:23.487368   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.606886   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:23.712661   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:23.988158   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:23.988623   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.107501   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.208703   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:24.489208   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:24.492980   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:24.607265   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:24.875787   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.022444   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.022536   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.103370   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.208331   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.485269   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.487836   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:25.603719   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:25.710451   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:25.988844   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:25.991143   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.105799   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.208159   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.487941   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.488197   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:26.610816   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:26.711438   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:26.988554   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:26.988678   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.104906   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.211687   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.490678   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:27.491397   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.605422   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:27.709918   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:27.986857   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:27.991254   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.104504   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.211676   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:28.486765   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:28.486908   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:28.603951   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:28.710810   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.039950   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.040032   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.104880   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.210332   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:29.486765   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:29.490387   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:29.605809   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:29.709672   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.234603   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.234694   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.234910   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.235218   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.487643   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.487909   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:30.602666   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:30.709504   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:30.986189   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:30.987847   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.106001   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.208478   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.486843   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.486859   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:31.603556   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:31.708238   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:31.988227   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:31.989189   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.104678   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.208363   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:32.486531   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:32.488065   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:32.608529   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:32.825732   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.004727   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.009413   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.432039   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.432854   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.488777   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:33.489648   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.603318   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:33.711296   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:33.986508   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:33.986537   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.105666   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.208395   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.486866   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.488238   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:34.603881   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:34.709073   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:34.986378   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:34.987365   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.106037   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.211738   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.493191   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:35.493517   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.605567   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:35.712771   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:35.987227   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:35.987860   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.103650   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.211521   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.486814   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:36.487072   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.603590   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:36.708474   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:36.986458   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:36:36.987153   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.109079   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.208801   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.488353   11611 kapi.go:107] duration metric: took 34.506606428s to wait for kubernetes.io/minikube-addons=registry ...
	I1126 19:36:37.489542   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:37.606541   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:37.713836   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:37.985587   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.107437   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.212647   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.488990   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:38.604377   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:38.711757   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:38.989079   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.107014   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.208897   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.489856   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:39.604340   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:39.709729   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:39.989251   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.105384   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.210689   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.486868   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:40.605226   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:40.708011   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:40.987195   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.104281   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.208558   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.485989   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:41.635352   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:41.709968   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:41.990231   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.105439   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.212031   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.485649   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:42.608693   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:42.710655   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:42.985249   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.105161   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.211444   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.485775   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:43.618058   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:43.716692   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:43.987323   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.104582   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.209761   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.488009   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:44.605945   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:44.711855   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:44.985558   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.102863   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.208767   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.488476   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:45.603931   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:45.708797   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:45.988957   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.107402   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.210590   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:46.488873   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:46.603309   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:46.708069   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.037669   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.104273   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.209723   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.485534   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:47.616379   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:47.714613   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:47.985314   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.109393   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.208135   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.488020   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:48.604524   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:48.707527   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:48.985308   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.105713   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.210053   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.487312   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:49.603701   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:49.710408   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:49.989190   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.115043   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.212246   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.488332   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:50.604130   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:50.709533   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:50.989570   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.102993   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.212397   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.760175   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:51.760434   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:51.764040   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:51.998386   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.194692   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.212283   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.490622   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:52.605594   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:52.717977   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:52.989416   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.104503   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.208891   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.486553   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:53.603741   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:53.710661   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:53.985350   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.105351   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.212996   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.492732   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:54.603628   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:54.711509   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:54.988583   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.103408   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.209271   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.489909   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:55.605340   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:55.716521   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:55.993894   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.104480   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.208963   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.486056   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:56.607858   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:56.713768   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:56.985661   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:57.104840   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.210105   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:57.489224   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:57.607303   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:57.711293   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:57.988507   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.103586   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.207171   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:58.486568   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:58.603857   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:58.708860   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:58.989635   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.103598   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.207682   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:59.487022   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:36:59.605864   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:36:59.708266   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:36:59.987009   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.103719   11611 kapi.go:107] duration metric: took 56.004374347s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1126 19:37:00.207837   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:00.485558   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:00.708048   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:00.986648   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.208166   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:01.485905   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:01.708569   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:01.985281   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:02.208363   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:02.485510   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:02.707668   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:02.985372   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:03.207610   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:03.485945   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:03.707439   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:03.985728   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:04.208702   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:04.485102   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:04.707198   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:04.986469   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:05.208201   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:05.603565   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:05.709549   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:05.985796   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:06.208395   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:06.485037   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:06.709181   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:06.986721   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:07.209727   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:07.487684   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:07.709785   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:07.989479   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:08.208824   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:08.486140   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:08.709162   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:08.986162   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:09.209752   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:09.486267   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:09.710977   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:09.987974   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:10.212284   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:10.485920   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:10.711250   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:10.989859   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:11.209576   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:11.485279   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:11.709675   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:11.986221   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:12.212281   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:12.487229   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:12.708565   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:12.998518   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:13.209189   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:13.485099   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:13.711987   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:13.986703   11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:14.208472   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:14.489290   11611 kapi.go:107] duration metric: took 1m11.5077899s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1126 19:37:14.708080   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:15.208068   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:15.713135   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:16.209616   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:16.707251   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:17.211697   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:17.707828   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:18.208209   11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:18.709028   11611 kapi.go:107] duration metric: took 1m13.004999661s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1126 19:37:18.710751   11611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-198878 cluster.
	I1126 19:37:18.712060   11611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1126 19:37:18.713287   11611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1126 19:37:18.714553   11611 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1126 19:37:18.715743   11611 addons.go:530] duration metric: took 1m25.489739712s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1126 19:37:18.715783   11611 start.go:247] waiting for cluster config update ...
	I1126 19:37:18.715806   11611 start.go:256] writing updated cluster config ...
	I1126 19:37:18.716055   11611 ssh_runner.go:195] Run: rm -f paused
	I1126 19:37:18.723210   11611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:18.726773   11611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rsq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.733896   11611 pod_ready.go:94] pod "coredns-66bc5c9577-6rsq5" is "Ready"
	I1126 19:37:18.733920   11611 pod_ready.go:86] duration metric: took 7.129783ms for pod "coredns-66bc5c9577-6rsq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.736723   11611 pod_ready.go:83] waiting for pod "etcd-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.743667   11611 pod_ready.go:94] pod "etcd-addons-198878" is "Ready"
	I1126 19:37:18.743685   11611 pod_ready.go:86] duration metric: took 6.947246ms for pod "etcd-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.746334   11611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.751960   11611 pod_ready.go:94] pod "kube-apiserver-addons-198878" is "Ready"
	I1126 19:37:18.751976   11611 pod_ready.go:86] duration metric: took 5.627398ms for pod "kube-apiserver-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:18.754570   11611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:19.127739   11611 pod_ready.go:94] pod "kube-controller-manager-addons-198878" is "Ready"
	I1126 19:37:19.127763   11611 pod_ready.go:86] duration metric: took 373.177086ms for pod "kube-controller-manager-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:19.326895   11611 pod_ready.go:83] waiting for pod "kube-proxy-qcc2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:19.728612   11611 pod_ready.go:94] pod "kube-proxy-qcc2j" is "Ready"
	I1126 19:37:19.728636   11611 pod_ready.go:86] duration metric: took 401.717594ms for pod "kube-proxy-qcc2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:19.928532   11611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:20.328225   11611 pod_ready.go:94] pod "kube-scheduler-addons-198878" is "Ready"
	I1126 19:37:20.328257   11611 pod_ready.go:86] duration metric: took 399.701466ms for pod "kube-scheduler-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:37:20.328273   11611 pod_ready.go:40] duration metric: took 1.60503412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:37:20.373111   11611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 19:37:20.374807   11611 out.go:179] * Done! kubectl is now configured to use "addons-198878" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.315099893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026315073311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4e093ff-931c-44a5-8395-01a3e07c5f01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.317775114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.317926755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.318354872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.366712554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ae47ff3-03b9-4986-aac8-3c78022b0646 name=/runtime.v1.RuntimeService/Version
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.366892429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ae47ff3-03b9-4986-aac8-3c78022b0646 name=/runtime.v1.RuntimeService/Version
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.368460729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa25886a-4d61-43d4-9ee6-a0a15b451fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.369747846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026369724438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa25886a-4d61-43d4-9ee6-a0a15b451fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.370627368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.370728770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.371086209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.414686686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8620abf-ba74-4da9-afd1-3738fc6173a3 name=/runtime.v1.RuntimeService/Version
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.414786996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8620abf-ba74-4da9-afd1-3738fc6173a3 name=/runtime.v1.RuntimeService/Version
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.416074819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fa8bb05-df6e-4b32-b514-ebd6fd1b7603 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.417525659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026417498887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fa8bb05-df6e-4b32-b514-ebd6fd1b7603 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.418772850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.418826093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.419289166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.431757076Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e): 2135952 bytes (100.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.431999310Z" level=debug msg="No compression detected" file="compression/compression.go:133"
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432232103Z" level=debug msg="Compression change for blob sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30 (\"application/vnd.docker.container.image.v1+json\") not supported" file="copy/compression.go:91"
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432266579Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432748397Z" level=debug msg="ImagePull (0): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 0 bytes (0.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.446626696Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 1197 bytes (100.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.446911272Z" level=debug msg="setting image creation date to 2022-07-10 23:15:54.185884751 +0000 UTC" file="storage/storage_dest.go:775"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	d687b1cb2df56       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   117709cb2d9a9       nginx                                      default
	3dcc6e175d9f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   a702030515d60       busybox                                    default
	4d8e34a5c12ee       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   9f66c7c3ecc73       ingress-nginx-controller-6c8bf45fb-dg8xd   ingress-nginx
	8fe1ba22354a6       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago       Exited              patch                     2                   d9b1194cb11d5       ingress-nginx-admission-patch-cjbkr        ingress-nginx
	557a2d941c8b5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   d1ef08ee18a30       ingress-nginx-admission-create-7hjrv       ingress-nginx
	3d295efad8604       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   e130e2b91c881       local-path-provisioner-648f6765c9-p6gkd    local-path-storage
	55f5cfada7e96       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   157f9cb5e65f4       kube-ingress-dns-minikube                  kube-system
	ac0a7ddcca4f6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   b111919f515bb       amd-gpu-device-plugin-zt7pv                kube-system
	6560862725d64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   beb1aaf56a990       storage-provisioner                        kube-system
	e80d4f44f2fa1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   45857bd9cf261       coredns-66bc5c9577-6rsq5                   kube-system
	d1122a8f344ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   6b5744ddd945e       kube-proxy-qcc2j                           kube-system
	039227d5c3266       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   8458f2d683add       kube-scheduler-addons-198878               kube-system
	5be95f29d66ce       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   8c92f4c64e711       kube-controller-manager-addons-198878      kube-system
	c0bebab376640       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   5f66c1af57799       etcd-addons-198878                         kube-system
	a81f4d1409262       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   bb05b454535e0       kube-apiserver-addons-198878               kube-system
	
	
	==> coredns [e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2] <==
	[INFO] 10.244.0.8:33561 - 30151 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000298692s
	[INFO] 10.244.0.8:33561 - 64276 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120548s
	[INFO] 10.244.0.8:33561 - 37248 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112972s
	[INFO] 10.244.0.8:33561 - 69 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000113303s
	[INFO] 10.244.0.8:33561 - 8021 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.0001298s
	[INFO] 10.244.0.8:33561 - 61068 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000275043s
	[INFO] 10.244.0.8:33561 - 36733 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000083602s
	[INFO] 10.244.0.8:48716 - 54185 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011778s
	[INFO] 10.244.0.8:48716 - 54412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000139693s
	[INFO] 10.244.0.8:51224 - 20145 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085443s
	[INFO] 10.244.0.8:51224 - 20419 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059093s
	[INFO] 10.244.0.8:53670 - 49812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116573s
	[INFO] 10.244.0.8:53670 - 50095 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093052s
	[INFO] 10.244.0.8:44853 - 31586 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108098s
	[INFO] 10.244.0.8:44853 - 31773 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066283s
	[INFO] 10.244.0.23:52964 - 12280 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00043788s
	[INFO] 10.244.0.23:49117 - 28544 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242505s
	[INFO] 10.244.0.23:49898 - 20618 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144907s
	[INFO] 10.244.0.23:56055 - 12240 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000260249s
	[INFO] 10.244.0.23:42306 - 48219 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164159s
	[INFO] 10.244.0.23:36251 - 32931 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00083501s
	[INFO] 10.244.0.23:60569 - 28083 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001313397s
	[INFO] 10.244.0.23:41101 - 63791 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00394348s
	[INFO] 10.244.0.29:55555 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001709973s
	[INFO] 10.244.0.29:33895 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169502s
	
	
	==> describe nodes <==
	Name:               addons-198878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-198878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=addons-198878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_35_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-198878
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:35:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-198878
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:40:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:38:52 +0000   Wed, 26 Nov 2025 19:35:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:38:52 +0000   Wed, 26 Nov 2025 19:35:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:38:52 +0000   Wed, 26 Nov 2025 19:35:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:38:52 +0000   Wed, 26 Nov 2025 19:35:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    addons-198878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a31c91d5706460a99595cc9b1ab6144
	  System UUID:                3a31c91d-5706-460a-9959-5cc9b1ab6144
	  Boot ID:                    bc5d73a4-1281-4d1e-819c-176197babf67
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-world-app-5d498dc89-tkxwx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-dg8xd    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-zt7pv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-66bc5c9577-6rsq5                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-198878                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-198878                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-198878       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-qcc2j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-198878                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  local-path-storage          local-path-provisioner-648f6765c9-p6gkd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node addons-198878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node addons-198878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node addons-198878 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node addons-198878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node addons-198878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node addons-198878 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m37s                  kubelet          Node addons-198878 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node addons-198878 event: Registered Node addons-198878 in Controller
	
	
	==> dmesg <==
	[Nov26 19:36] kauditd_printk_skb: 344 callbacks suppressed
	[  +5.630397] kauditd_printk_skb: 347 callbacks suppressed
	[  +7.669119] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.772707] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.286009] kauditd_printk_skb: 17 callbacks suppressed
	[  +9.482407] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.283182] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.904571] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.785327] kauditd_printk_skb: 77 callbacks suppressed
	[Nov26 19:37] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000147] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.189870] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.607761] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.470165] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.001321] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.425793] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.016995] kauditd_printk_skb: 108 callbacks suppressed
	[  +0.850761] kauditd_printk_skb: 172 callbacks suppressed
	[Nov26 19:38] kauditd_printk_skb: 133 callbacks suppressed
	[  +1.395834] kauditd_printk_skb: 48 callbacks suppressed
	[  +0.000045] kauditd_printk_skb: 10 callbacks suppressed
	[ +11.973808] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.453461] kauditd_printk_skb: 41 callbacks suppressed
	[Nov26 19:40] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169] <==
	{"level":"info","ts":"2025-11-26T19:36:51.741174Z","caller":"traceutil/trace.go:172","msg":"trace[1007688485] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5c1cb530b; range_end:; response_count:1; response_revision:1082; }","duration":"323.327378ms","start":"2025-11-26T19:36:51.417785Z","end":"2025-11-26T19:36:51.741112Z","steps":["trace[1007688485] 'agreement among raft nodes before linearized reading'  (duration: 323.175288ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:36:51.741202Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:36:51.417766Z","time spent":"323.42907ms","remote":"127.0.0.1:48074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":922,"request content":"key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5c1cb530b\" limit:1 "}
	{"level":"warn","ts":"2025-11-26T19:36:51.741418Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"264.03278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:36:51.741437Z","caller":"traceutil/trace.go:172","msg":"trace[235327979] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"264.052529ms","start":"2025-11-26T19:36:51.477379Z","end":"2025-11-26T19:36:51.741432Z","steps":["trace[235327979] 'agreement among raft nodes before linearized reading'  (duration: 264.016233ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:36:51.741557Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.132994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:36:51.741594Z","caller":"traceutil/trace.go:172","msg":"trace[2147001097] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"146.168737ms","start":"2025-11-26T19:36:51.595420Z","end":"2025-11-26T19:36:51.741589Z","steps":["trace[2147001097] 'agreement among raft nodes before linearized reading'  (duration: 146.127019ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:37:05.591379Z","caller":"traceutil/trace.go:172","msg":"trace[1894609465] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"272.066417ms","start":"2025-11-26T19:37:05.319189Z","end":"2025-11-26T19:37:05.591256Z","steps":["trace[1894609465] 'read index received'  (duration: 272.060004ms)","trace[1894609465] 'applied index is now lower than readState.Index'  (duration: 5.495µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T19:37:05.591520Z","caller":"traceutil/trace.go:172","msg":"trace[1272142295] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"315.442107ms","start":"2025-11-26T19:37:05.276068Z","end":"2025-11-26T19:37:05.591510Z","steps":["trace[1272142295] 'process raft request'  (duration: 315.226614ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:37:05.591868Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.658913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-cjbkr\" limit:1 ","response":"range_response_count:1 size:4885"}
	{"level":"info","ts":"2025-11-26T19:37:05.591922Z","caller":"traceutil/trace.go:172","msg":"trace[1249583433] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-cjbkr; range_end:; response_count:1; response_revision:1159; }","duration":"272.720048ms","start":"2025-11-26T19:37:05.319186Z","end":"2025-11-26T19:37:05.591906Z","steps":["trace[1249583433] 'agreement among raft nodes before linearized reading'  (duration: 272.559972ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:37:05.591945Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:37:05.275973Z","time spent":"315.879185ms","remote":"127.0.0.1:48074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":919,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" mod_revision:1068 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" value_size:818 lease:6421740275250318311 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" > >"}
	{"level":"warn","ts":"2025-11-26T19:37:05.592100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.967541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:37:05.592123Z","caller":"traceutil/trace.go:172","msg":"trace[476598973] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1159; }","duration":"235.990347ms","start":"2025-11-26T19:37:05.356125Z","end":"2025-11-26T19:37:05.592116Z","steps":["trace[476598973] 'agreement among raft nodes before linearized reading'  (duration: 235.94434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:37:05.592265Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.704999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:37:05.592290Z","caller":"traceutil/trace.go:172","msg":"trace[1363427135] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1159; }","duration":"232.730943ms","start":"2025-11-26T19:37:05.359552Z","end":"2025-11-26T19:37:05.592283Z","steps":["trace[1363427135] 'agreement among raft nodes before linearized reading'  (duration: 232.677734ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:37:05.592405Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.907902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:37:05.592428Z","caller":"traceutil/trace.go:172","msg":"trace[732722930] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1159; }","duration":"113.93171ms","start":"2025-11-26T19:37:05.478490Z","end":"2025-11-26T19:37:05.592422Z","steps":["trace[732722930] 'agreement among raft nodes before linearized reading'  (duration: 113.890718ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:37:44.587834Z","caller":"traceutil/trace.go:172","msg":"trace[1046687004] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"229.181041ms","start":"2025-11-26T19:37:44.358573Z","end":"2025-11-26T19:37:44.587754Z","steps":["trace[1046687004] 'process raft request'  (duration: 229.090356ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-26T19:37:46.652164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.878472ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6421740275250319562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" mod_revision:1396 > success:<request_delete_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > > failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-11-26T19:37:46.653523Z","caller":"traceutil/trace.go:172","msg":"trace[179650896] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1397; }","duration":"365.411173ms","start":"2025-11-26T19:37:46.288098Z","end":"2025-11-26T19:37:46.653509Z","steps":["trace[179650896] 'process raft request'  (duration: 88.06036ms)","trace[179650896] 'compare'  (duration: 275.691291ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-26T19:37:46.653913Z","caller":"traceutil/trace.go:172","msg":"trace[1200581374] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"287.673652ms","start":"2025-11-26T19:37:46.364684Z","end":"2025-11-26T19:37:46.652358Z","steps":["trace[1200581374] 'process raft request'  (duration: 287.576948ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T19:37:46.654080Z","caller":"traceutil/trace.go:172","msg":"trace[1873023201] linearizableReadLoop","detail":"{readStateIndex:1440; appliedIndex:1439; }","duration":"202.792718ms","start":"2025-11-26T19:37:46.451273Z","end":"2025-11-26T19:37:46.654065Z","steps":["trace[1873023201] 'read index received'  (duration: 196.917087ms)","trace[1873023201] 'applied index is now lower than readState.Index'  (duration: 5.874068ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-26T19:37:46.654871Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:37:46.288079Z","time spent":"365.64963ms","remote":"127.0.0.1:48312","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" mod_revision:1396 > success:<request_delete_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > > failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > >"}
	{"level":"warn","ts":"2025-11-26T19:37:46.654891Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.627757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-26T19:37:46.654955Z","caller":"traceutil/trace.go:172","msg":"trace[1767948582] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1398; }","duration":"203.694123ms","start":"2025-11-26T19:37:46.451250Z","end":"2025-11-26T19:37:46.654944Z","steps":["trace[1767948582] 'agreement among raft nodes before linearized reading'  (duration: 203.487208ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:40:26 up 5 min,  0 users,  load average: 1.45, 1.66, 0.83
	Linux addons-198878 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95] <==
	W1126 19:36:22.406549       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:22.433489       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:22.463651       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:36:22.493996       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1126 19:37:31.146399       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:54072: use of closed network connection
	E1126 19:37:31.340228       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:54098: use of closed network connection
	I1126 19:37:40.828920       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.46.176"}
	I1126 19:37:58.927209       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1126 19:37:59.125631       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.159.75"}
	I1126 19:38:16.812734       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1126 19:38:27.326580       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1126 19:38:43.962015       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1126 19:38:43.962080       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1126 19:38:44.008803       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1126 19:38:44.008997       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1126 19:38:44.020692       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1126 19:38:44.022428       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1126 19:38:44.035413       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1126 19:38:44.035500       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1126 19:38:44.072883       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1126 19:38:44.072911       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1126 19:38:45.021167       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1126 19:38:45.076377       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1126 19:38:45.108956       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1126 19:40:25.203386       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.157.132"}
	
	
	==> kube-controller-manager [5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f] <==
	E1126 19:38:54.125961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:38:54.215262       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:38:54.216553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:01.336366       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:01.337624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:02.725192       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:02.726402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:06.552147       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:06.553126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:17.540525       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:17.541508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:17.569449       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:17.570621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:19.917762       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:19.919000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:43.816572       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:43.817670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:52.278380       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:52.279667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:39:59.481715       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:39:59.482982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:40:16.608635       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:40:16.609775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1126 19:40:23.806495       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1126 19:40:23.808124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4] <==
	I1126 19:35:55.182556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:35:55.283149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:35:55.284422       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.123"]
	E1126 19:35:55.285000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:35:55.694536       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1126 19:35:55.694748       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1126 19:35:55.694776       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:35:55.712783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:35:55.714579       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:35:55.716073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:35:55.742097       1 config.go:200] "Starting service config controller"
	I1126 19:35:55.742136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:35:55.745083       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:35:55.745121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:35:55.745383       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:35:55.745410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:35:55.750732       1 config.go:309] "Starting node config controller"
	I1126 19:35:55.750764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:35:55.750771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:35:55.843123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:35:55.845394       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 19:35:55.845826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad] <==
	E1126 19:35:45.453906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:35:45.454008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:35:45.454095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:45.454435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:35:45.454532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:45.455439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:35:45.461560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:45.461655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:35:46.305266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:35:46.338366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:35:46.349551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:35:46.407909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:35:46.425909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:35:46.452843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:35:46.464245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:35:46.477008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:35:46.548444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:35:46.575421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:35:46.645344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:35:46.701189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:35:46.732095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:35:46.789168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:35:46.833947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:35:46.933536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1126 19:35:49.231943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:38:48 addons-198878 kubelet[1502]: E1126 19:38:48.543452    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185928542529210  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:38:48 addons-198878 kubelet[1502]: E1126 19:38:48.543541    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185928542529210  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:38:49 addons-198878 kubelet[1502]: I1126 19:38:49.429382    1502 scope.go:117] "RemoveContainer" containerID="ad378d4480d0f0322e2566cc4e18f336840698be845a75feb11dba46fa939cf0"
	Nov 26 19:38:49 addons-198878 kubelet[1502]: I1126 19:38:49.552905    1502 scope.go:117] "RemoveContainer" containerID="161d0d203b8ccba0beae02de17c2d8098e2d82c463e97f5c1520cd938a346ef4"
	Nov 26 19:38:58 addons-198878 kubelet[1502]: E1126 19:38:58.547405    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185938546137515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:38:58 addons-198878 kubelet[1502]: E1126 19:38:58.547436    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185938546137515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:08 addons-198878 kubelet[1502]: E1126 19:39:08.550388    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185948549778808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:08 addons-198878 kubelet[1502]: E1126 19:39:08.550437    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185948549778808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:18 addons-198878 kubelet[1502]: E1126 19:39:18.553894    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185958553082825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:18 addons-198878 kubelet[1502]: E1126 19:39:18.553927    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185958553082825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:28 addons-198878 kubelet[1502]: E1126 19:39:28.557085    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185968556739123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:28 addons-198878 kubelet[1502]: E1126 19:39:28.557109    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185968556739123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:38 addons-198878 kubelet[1502]: E1126 19:39:38.563029    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185978560693859  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:38 addons-198878 kubelet[1502]: E1126 19:39:38.563653    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185978560693859  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:48 addons-198878 kubelet[1502]: E1126 19:39:48.566684    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185988566054156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:48 addons-198878 kubelet[1502]: E1126 19:39:48.566718    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185988566054156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:58 addons-198878 kubelet[1502]: I1126 19:39:58.266875    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zt7pv" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:39:58 addons-198878 kubelet[1502]: E1126 19:39:58.570348    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185998569798236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:39:58 addons-198878 kubelet[1502]: E1126 19:39:58.570371    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185998569798236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:40:08 addons-198878 kubelet[1502]: E1126 19:40:08.573026    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764186008572425379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:40:08 addons-198878 kubelet[1502]: E1126 19:40:08.573083    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764186008572425379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:40:09 addons-198878 kubelet[1502]: I1126 19:40:09.266060    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:40:18 addons-198878 kubelet[1502]: E1126 19:40:18.576451    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764186018575972021  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:40:18 addons-198878 kubelet[1502]: E1126 19:40:18.576495    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764186018575972021  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 26 19:40:25 addons-198878 kubelet[1502]: I1126 19:40:25.234019    1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6jh\" (UniqueName: \"kubernetes.io/projected/f3b2ee9c-5d99-4c4f-b718-3209c64f7159-kube-api-access-6j6jh\") pod \"hello-world-app-5d498dc89-tkxwx\" (UID: \"f3b2ee9c-5d99-4c4f-b718-3209c64f7159\") " pod="default/hello-world-app-5d498dc89-tkxwx"
	
	
	==> storage-provisioner [6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171] <==
	W1126 19:40:01.628095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:03.632368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:03.640272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:05.643675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:05.648846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:07.652492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:07.660807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:09.664062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:09.669929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:11.673933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:11.680845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:13.684982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:13.691447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:15.695241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:15.703148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:17.706744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:17.711504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:19.715568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:19.723397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:21.728582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:21.735911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:23.741847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:23.749906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:25.753465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:25.760411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-198878 -n addons-198878
helpers_test.go:269: (dbg) Run:  kubectl --context addons-198878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr: exit status 1 (59.837946ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7hjrv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cjbkr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable ingress-dns --alsologtostderr -v=1: (1.347924462s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable ingress --alsologtostderr -v=1: (7.741804471s)
--- FAIL: TestAddons/parallel/Ingress (158.06s)

                                                
                                    
TestPreload (150.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1126 20:25:48.493679   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m31.937528634s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627885 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-627885 image pull gcr.io/k8s-minikube/busybox: (2.438553013s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-627885
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-627885: (6.929516777s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627885 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1126 20:27:21.020458   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627885 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (46.295632967s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627885 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-11-26 20:27:49.275310645 +0000 UTC m=+3176.720931987
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-627885 -n test-preload-627885
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-627885 logs -n 25: (1.030966547s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-230981 ssh -n multinode-230981-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ ssh     │ multinode-230981 ssh -n multinode-230981 sudo cat /home/docker/cp-test_multinode-230981-m03_multinode-230981.txt                                          │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ cp      │ multinode-230981 cp multinode-230981-m03:/home/docker/cp-test.txt multinode-230981-m02:/home/docker/cp-test_multinode-230981-m03_multinode-230981-m02.txt │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ ssh     │ multinode-230981 ssh -n multinode-230981-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ ssh     │ multinode-230981 ssh -n multinode-230981-m02 sudo cat /home/docker/cp-test_multinode-230981-m03_multinode-230981-m02.txt                                  │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ node    │ multinode-230981 node stop m03                                                                                                                            │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:14 UTC │
	│ node    │ multinode-230981 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:15 UTC │
	│ node    │ list -p multinode-230981                                                                                                                                  │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │                     │
	│ stop    │ -p multinode-230981                                                                                                                                       │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:15 UTC │ 26 Nov 25 20:17 UTC │
	│ start   │ -p multinode-230981 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:17 UTC │ 26 Nov 25 20:20 UTC │
	│ node    │ list -p multinode-230981                                                                                                                                  │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │                     │
	│ node    │ multinode-230981 node delete m03                                                                                                                          │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:20 UTC │
	│ stop    │ multinode-230981 stop                                                                                                                                     │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:20 UTC │ 26 Nov 25 20:23 UTC │
	│ start   │ -p multinode-230981 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:23 UTC │ 26 Nov 25 20:24 UTC │
	│ node    │ list -p multinode-230981                                                                                                                                  │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ start   │ -p multinode-230981-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-230981-m02 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │                     │
	│ start   │ -p multinode-230981-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-230981-m03 │ jenkins │ v1.37.0 │ 26 Nov 25 20:24 UTC │ 26 Nov 25 20:25 UTC │
	│ node    │ add -p multinode-230981                                                                                                                                   │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:25 UTC │                     │
	│ delete  │ -p multinode-230981-m03                                                                                                                                   │ multinode-230981-m03 │ jenkins │ v1.37.0 │ 26 Nov 25 20:25 UTC │ 26 Nov 25 20:25 UTC │
	│ delete  │ -p multinode-230981                                                                                                                                       │ multinode-230981     │ jenkins │ v1.37.0 │ 26 Nov 25 20:25 UTC │ 26 Nov 25 20:25 UTC │
	│ start   │ -p test-preload-627885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-627885  │ jenkins │ v1.37.0 │ 26 Nov 25 20:25 UTC │ 26 Nov 25 20:26 UTC │
	│ image   │ test-preload-627885 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-627885  │ jenkins │ v1.37.0 │ 26 Nov 25 20:26 UTC │ 26 Nov 25 20:26 UTC │
	│ stop    │ -p test-preload-627885                                                                                                                                    │ test-preload-627885  │ jenkins │ v1.37.0 │ 26 Nov 25 20:26 UTC │ 26 Nov 25 20:27 UTC │
	│ start   │ -p test-preload-627885 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-627885  │ jenkins │ v1.37.0 │ 26 Nov 25 20:27 UTC │ 26 Nov 25 20:27 UTC │
	│ image   │ test-preload-627885 image list                                                                                                                            │ test-preload-627885  │ jenkins │ v1.37.0 │ 26 Nov 25 20:27 UTC │ 26 Nov 25 20:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:27:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:27:02.840827   33898 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:27:02.840943   33898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:27:02.840950   33898 out.go:374] Setting ErrFile to fd 2...
	I1126 20:27:02.840957   33898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:27:02.841208   33898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:27:02.841666   33898 out.go:368] Setting JSON to false
	I1126 20:27:02.842627   33898 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4173,"bootTime":1764184650,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:27:02.842684   33898 start.go:143] virtualization: kvm guest
	I1126 20:27:02.844704   33898 out.go:179] * [test-preload-627885] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:27:02.846217   33898 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:27:02.846258   33898 notify.go:221] Checking for updates...
	I1126 20:27:02.848431   33898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:27:02.849420   33898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 20:27:02.850415   33898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 20:27:02.851500   33898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:27:02.852775   33898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:27:02.854339   33898 config.go:182] Loaded profile config "test-preload-627885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:27:02.855015   33898 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:27:02.888703   33898 out.go:179] * Using the kvm2 driver based on existing profile
	I1126 20:27:02.889717   33898 start.go:309] selected driver: kvm2
	I1126 20:27:02.889727   33898 start.go:927] validating driver "kvm2" against &{Name:test-preload-627885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-627885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:27:02.889817   33898 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:27:02.890743   33898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:27:02.890774   33898 cni.go:84] Creating CNI manager for ""
	I1126 20:27:02.890821   33898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 20:27:02.890864   33898 start.go:353] cluster config:
	{Name:test-preload-627885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-627885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:27:02.890946   33898 iso.go:125] acquiring lock: {Name:mkfe3dbb7c1a56d5a5080a4e71d079899ad19ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:27:02.892341   33898 out.go:179] * Starting "test-preload-627885" primary control-plane node in "test-preload-627885" cluster
	I1126 20:27:02.893304   33898 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:27:02.893326   33898 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1126 20:27:02.893332   33898 cache.go:65] Caching tarball of preloaded images
	I1126 20:27:02.893409   33898 preload.go:238] Found /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1126 20:27:02.893420   33898 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:27:02.893502   33898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/config.json ...
	I1126 20:27:02.893673   33898 start.go:360] acquireMachinesLock for test-preload-627885: {Name:mk682108a3404f6d853d2e6b676abccdb6a57902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1126 20:27:02.893714   33898 start.go:364] duration metric: took 26.317µs to acquireMachinesLock for "test-preload-627885"
	I1126 20:27:02.893728   33898 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:27:02.893733   33898 fix.go:54] fixHost starting: 
	I1126 20:27:02.895227   33898 fix.go:112] recreateIfNeeded on test-preload-627885: state=Stopped err=<nil>
	W1126 20:27:02.895246   33898 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:27:02.896692   33898 out.go:252] * Restarting existing kvm2 VM for "test-preload-627885" ...
	I1126 20:27:02.896728   33898 main.go:143] libmachine: starting domain...
	I1126 20:27:02.896736   33898 main.go:143] libmachine: ensuring networks are active...
	I1126 20:27:02.897703   33898 main.go:143] libmachine: Ensuring network default is active
	I1126 20:27:02.898044   33898 main.go:143] libmachine: Ensuring network mk-test-preload-627885 is active
	I1126 20:27:02.898464   33898 main.go:143] libmachine: getting domain XML...
	I1126 20:27:02.899622   33898 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-627885</name>
	  <uuid>65816bc4-9295-4881-9973-778935fcb046</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/test-preload-627885.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f8:56:81'/>
	      <source network='mk-test-preload-627885'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:37:e7:6e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1126 20:27:04.163503   33898 main.go:143] libmachine: waiting for domain to start...
	I1126 20:27:04.164946   33898 main.go:143] libmachine: domain is now running
	I1126 20:27:04.164967   33898 main.go:143] libmachine: waiting for IP...
	I1126 20:27:04.166053   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:04.166651   33898 main.go:143] libmachine: domain test-preload-627885 has current primary IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:04.166671   33898 main.go:143] libmachine: found domain IP: 192.168.39.3
	I1126 20:27:04.166680   33898 main.go:143] libmachine: reserving static IP address...
	I1126 20:27:04.167134   33898 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-627885", mac: "52:54:00:f8:56:81", ip: "192.168.39.3"} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:25:37 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:04.167169   33898 main.go:143] libmachine: skip adding static IP to network mk-test-preload-627885 - found existing host DHCP lease matching {name: "test-preload-627885", mac: "52:54:00:f8:56:81", ip: "192.168.39.3"}
	I1126 20:27:04.167186   33898 main.go:143] libmachine: reserved static IP address 192.168.39.3 for domain test-preload-627885
	I1126 20:27:04.167197   33898 main.go:143] libmachine: waiting for SSH...
	I1126 20:27:04.167206   33898 main.go:143] libmachine: Getting to WaitForSSH function...
	I1126 20:27:04.169589   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:04.169935   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:25:37 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:04.169964   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:04.170155   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:04.170483   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:04.170498   33898 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1126 20:27:07.249359   33898 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1126 20:27:13.329458   33898 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1126 20:27:16.331984   33898 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: connection refused
	I1126 20:27:19.442833   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:27:19.446239   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.446675   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.446699   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.446938   33898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/config.json ...
	I1126 20:27:19.447186   33898 machine.go:94] provisionDockerMachine start ...
	I1126 20:27:19.449212   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.449540   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.449563   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.449717   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:19.449900   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:19.449920   33898 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:27:19.564604   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1126 20:27:19.564642   33898 buildroot.go:166] provisioning hostname "test-preload-627885"
	I1126 20:27:19.567607   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.568011   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.568034   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.568230   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:19.568426   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:19.568437   33898 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-627885 && echo "test-preload-627885" | sudo tee /etc/hostname
	I1126 20:27:19.697856   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-627885
	
	I1126 20:27:19.700786   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.701165   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.701193   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.701336   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:19.701523   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:19.701538   33898 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-627885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-627885/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-627885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:27:19.823032   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:27:19.823059   33898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21974-7091/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-7091/.minikube}
	I1126 20:27:19.823109   33898 buildroot.go:174] setting up certificates
	I1126 20:27:19.823119   33898 provision.go:84] configureAuth start
	I1126 20:27:19.825893   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.826267   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.826291   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.828347   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.828655   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.828672   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.828802   33898 provision.go:143] copyHostCerts
	I1126 20:27:19.828855   33898 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-7091/.minikube/cert.pem, removing ...
	I1126 20:27:19.828865   33898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-7091/.minikube/cert.pem
	I1126 20:27:19.828928   33898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/cert.pem (1123 bytes)
	I1126 20:27:19.829021   33898 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-7091/.minikube/key.pem, removing ...
	I1126 20:27:19.829030   33898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-7091/.minikube/key.pem
	I1126 20:27:19.829063   33898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/key.pem (1675 bytes)
	I1126 20:27:19.829131   33898 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-7091/.minikube/ca.pem, removing ...
	I1126 20:27:19.829141   33898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-7091/.minikube/ca.pem
	I1126 20:27:19.829164   33898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/ca.pem (1082 bytes)
	I1126 20:27:19.829214   33898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem org=jenkins.test-preload-627885 san=[127.0.0.1 192.168.39.3 localhost minikube test-preload-627885]
	I1126 20:27:19.933847   33898 provision.go:177] copyRemoteCerts
	I1126 20:27:19.933898   33898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:27:19.936841   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.937680   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:19.937727   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:19.937923   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:20.025632   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1126 20:27:20.058173   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:27:20.090502   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1126 20:27:20.124157   33898 provision.go:87] duration metric: took 301.024224ms to configureAuth
	I1126 20:27:20.124196   33898 buildroot.go:189] setting minikube options for container-runtime
	I1126 20:27:20.124430   33898 config.go:182] Loaded profile config "test-preload-627885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:27:20.127595   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.128103   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.128134   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.128309   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:20.128503   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:20.128516   33898 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:27:20.383025   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:27:20.383052   33898 machine.go:97] duration metric: took 935.85341ms to provisionDockerMachine
	I1126 20:27:20.383076   33898 start.go:293] postStartSetup for "test-preload-627885" (driver="kvm2")
	I1126 20:27:20.383106   33898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:27:20.383180   33898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:27:20.386254   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.386684   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.386707   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.386859   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:20.481187   33898 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:27:20.486791   33898 info.go:137] Remote host: Buildroot 2025.02
	I1126 20:27:20.486826   33898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/addons for local assets ...
	I1126 20:27:20.486899   33898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/files for local assets ...
	I1126 20:27:20.486997   33898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-7091/.minikube/files/etc/ssl/certs/110032.pem -> 110032.pem in /etc/ssl/certs
	I1126 20:27:20.487113   33898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:27:20.503912   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/files/etc/ssl/certs/110032.pem --> /etc/ssl/certs/110032.pem (1708 bytes)
	I1126 20:27:20.542431   33898 start.go:296] duration metric: took 159.34176ms for postStartSetup
	I1126 20:27:20.542464   33898 fix.go:56] duration metric: took 17.648731033s for fixHost
	I1126 20:27:20.545182   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.545509   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.545528   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.545697   33898 main.go:143] libmachine: Using SSH client type: native
	I1126 20:27:20.545885   33898 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1126 20:27:20.545899   33898 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1126 20:27:20.656195   33898 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764188840.621256872
	
	I1126 20:27:20.656223   33898 fix.go:216] guest clock: 1764188840.621256872
	I1126 20:27:20.656232   33898 fix.go:229] Guest: 2025-11-26 20:27:20.621256872 +0000 UTC Remote: 2025-11-26 20:27:20.542467687 +0000 UTC m=+17.751058820 (delta=78.789185ms)
	I1126 20:27:20.656254   33898 fix.go:200] guest clock delta is within tolerance: 78.789185ms
	I1126 20:27:20.656264   33898 start.go:83] releasing machines lock for "test-preload-627885", held for 17.762540255s
	I1126 20:27:20.659254   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.659617   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.659637   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.660199   33898 ssh_runner.go:195] Run: cat /version.json
	I1126 20:27:20.660254   33898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:27:20.663232   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.663529   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.663603   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.663632   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.663773   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:20.663972   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:20.663999   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:20.664203   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:20.743160   33898 ssh_runner.go:195] Run: systemctl --version
	I1126 20:27:20.769288   33898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:27:20.916890   33898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:27:20.931943   33898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:27:20.932009   33898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:27:20.954097   33898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1126 20:27:20.954120   33898 start.go:496] detecting cgroup driver to use...
	I1126 20:27:20.954196   33898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:27:20.975123   33898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:27:20.992949   33898 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:27:20.993005   33898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:27:21.012039   33898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:27:21.029276   33898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:27:21.182061   33898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:27:21.409903   33898 docker.go:234] disabling docker service ...
	I1126 20:27:21.409984   33898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:27:21.428079   33898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:27:21.444996   33898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:27:21.603527   33898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:27:21.747919   33898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:27:21.764934   33898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:27:21.789510   33898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:27:21.789578   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.802892   33898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:27:21.802965   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.816662   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.830314   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.843426   33898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:27:21.857243   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.870022   33898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:27:21.892401   33898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
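The `sed` edits above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: any existing `pause_image` line is replaced wholesale, whether commented out or not. The same whole-line substitution can be sketched in Go with a multiline regular expression (the function name is ours, not minikube's):

```go
package main

import (
	"fmt"
	"regexp"
)

// setPauseImage mirrors the sed edit in the log: replace any line
// mentioning pause_image (including commented-out defaults) with a
// fresh assignment of the desired image.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, `pause_image = "`+image+`"`)
}

func main() {
	conf := "[crio.image]\n# pause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
	// prints:
	// [crio.image]
	// pause_image = "registry.k8s.io/pause:3.10.1"
}
```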
	I1126 20:27:21.905566   33898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:27:21.917114   33898 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 20:27:21.917183   33898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 20:27:21.938365   33898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:27:21.950247   33898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:27:22.094809   33898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:27:22.222343   33898 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:27:22.222415   33898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:27:22.228530   33898 start.go:564] Will wait 60s for crictl version
	I1126 20:27:22.228584   33898 ssh_runner.go:195] Run: which crictl
	I1126 20:27:22.233113   33898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1126 20:27:22.270589   33898 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1126 20:27:22.270690   33898 ssh_runner.go:195] Run: crio --version
	I1126 20:27:22.302203   33898 ssh_runner.go:195] Run: crio --version
	I1126 20:27:22.335372   33898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1126 20:27:22.339292   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:22.339749   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:22.339771   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:22.340007   33898 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1126 20:27:22.344939   33898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:27:22.360906   33898 kubeadm.go:884] updating cluster {Name:test-preload-627885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-627885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:27:22.361022   33898 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:27:22.361111   33898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:27:22.396453   33898 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1126 20:27:22.396529   33898 ssh_runner.go:195] Run: which lz4
	I1126 20:27:22.401285   33898 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1126 20:27:22.406896   33898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1126 20:27:22.406932   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1126 20:27:24.043444   33898 crio.go:462] duration metric: took 1.64219112s to copy over tarball
	I1126 20:27:24.043519   33898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1126 20:27:25.684219   33898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.640672068s)
	I1126 20:27:25.684248   33898 crio.go:469] duration metric: took 1.640773041s to extract the tarball
	I1126 20:27:25.684257   33898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1126 20:27:25.732247   33898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:27:25.771187   33898 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:27:25.771215   33898 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:27:25.771226   33898 kubeadm.go:935] updating node { 192.168.39.3 8443 v1.34.1 crio true true} ...
	I1126 20:27:25.771342   33898 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-627885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:test-preload-627885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:27:25.771407   33898 ssh_runner.go:195] Run: crio config
	I1126 20:27:25.822664   33898 cni.go:84] Creating CNI manager for ""
	I1126 20:27:25.822688   33898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 20:27:25.822703   33898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:27:25.822722   33898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-627885 NodeName:test-preload-627885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:27:25.822851   33898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-627885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.3"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:27:25.822932   33898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:27:25.836645   33898 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:27:25.836713   33898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:27:25.849644   33898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1126 20:27:25.871157   33898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:27:25.892558   33898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1126 20:27:25.916052   33898 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1126 20:27:25.920524   33898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:27:25.936020   33898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:27:26.079673   33898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:27:26.101559   33898 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885 for IP: 192.168.39.3
	I1126 20:27:26.101584   33898 certs.go:195] generating shared ca certs ...
	I1126 20:27:26.101606   33898 certs.go:227] acquiring lock for ca certs: {Name:mkec6f6093be68a4f0c7d5c64487ef4e93539f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:27:26.101805   33898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key
	I1126 20:27:26.101896   33898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key
	I1126 20:27:26.101919   33898 certs.go:257] generating profile certs ...
	I1126 20:27:26.102046   33898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.key
	I1126 20:27:26.102160   33898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/apiserver.key.c58ed8ca
	I1126 20:27:26.102234   33898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/proxy-client.key
	I1126 20:27:26.102403   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/11003.pem (1338 bytes)
	W1126 20:27:26.102447   33898 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-7091/.minikube/certs/11003_empty.pem, impossibly tiny 0 bytes
	I1126 20:27:26.102461   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:27:26.102504   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem (1082 bytes)
	I1126 20:27:26.102559   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:27:26.102611   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem (1675 bytes)
	I1126 20:27:26.102674   33898 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/files/etc/ssl/certs/110032.pem (1708 bytes)
	I1126 20:27:26.103580   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:27:26.141782   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1126 20:27:26.184207   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:27:26.215699   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:27:26.249196   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:27:26.282649   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:27:26.314841   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:27:26.348904   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:27:26.383603   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/files/etc/ssl/certs/110032.pem --> /usr/share/ca-certificates/110032.pem (1708 bytes)
	I1126 20:27:26.415807   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:27:26.452280   33898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/certs/11003.pem --> /usr/share/ca-certificates/11003.pem (1338 bytes)
	I1126 20:27:26.482994   33898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:27:26.507406   33898 ssh_runner.go:195] Run: openssl version
	I1126 20:27:26.514801   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110032.pem && ln -fs /usr/share/ca-certificates/110032.pem /etc/ssl/certs/110032.pem"
	I1126 20:27:26.530037   33898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110032.pem
	I1126 20:27:26.536198   33898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:42 /usr/share/ca-certificates/110032.pem
	I1126 20:27:26.536267   33898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110032.pem
	I1126 20:27:26.544302   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110032.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:27:26.558885   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:27:26.573339   33898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:27:26.578784   33898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:27:26.578855   33898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:27:26.586395   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:27:26.600575   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11003.pem && ln -fs /usr/share/ca-certificates/11003.pem /etc/ssl/certs/11003.pem"
	I1126 20:27:26.614831   33898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11003.pem
	I1126 20:27:26.620561   33898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:42 /usr/share/ca-certificates/11003.pem
	I1126 20:27:26.620620   33898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11003.pem
	I1126 20:27:26.628228   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11003.pem /etc/ssl/certs/51391683.0"
	I1126 20:27:26.641604   33898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:27:26.647327   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:27:26.655136   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:27:26.665163   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:27:26.673471   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:27:26.681590   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:27:26.689463   33898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:27:26.697186   33898 kubeadm.go:401] StartCluster: {Name:test-preload-627885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-627885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:27:26.697267   33898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:27:26.697334   33898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:27:26.731926   33898 cri.go:89] found id: ""
	I1126 20:27:26.731995   33898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:27:26.744706   33898 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:27:26.744731   33898 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:27:26.744790   33898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:27:26.756647   33898 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:27:26.757040   33898 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-627885" does not appear in /home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 20:27:26.757188   33898 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-7091/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-627885" cluster setting kubeconfig missing "test-preload-627885" context setting]
	I1126 20:27:26.757448   33898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/kubeconfig: {Name:mk17b8b187372462ddf3f30b5296315dcdc9fda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:27:26.757919   33898 kapi.go:59] client config for test-preload-627885: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.key", CAFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:27:26.758326   33898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:27:26.758342   33898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:27:26.758350   33898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:27:26.758357   33898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:27:26.758362   33898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:27:26.758625   33898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:27:26.770345   33898 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.3
	I1126 20:27:26.770374   33898 kubeadm.go:1161] stopping kube-system containers ...
	I1126 20:27:26.770384   33898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1126 20:27:26.770430   33898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:27:26.808302   33898 cri.go:89] found id: ""
	I1126 20:27:26.808388   33898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1126 20:27:26.833208   33898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:27:26.845237   33898 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:27:26.845274   33898 kubeadm.go:158] found existing configuration files:
	
	I1126 20:27:26.845318   33898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:27:26.856500   33898 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:27:26.856566   33898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:27:26.868799   33898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:27:26.879904   33898 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:27:26.879983   33898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:27:26.892180   33898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:27:26.903073   33898 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:27:26.903159   33898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:27:26.915180   33898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:27:26.926541   33898 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:27:26.926603   33898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:27:26.938146   33898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:27:26.950507   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:27.009204   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:27.735916   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:28.003834   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:28.084597   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:28.171216   33898 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:27:28.171310   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:28.671548   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:29.171588   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:29.671493   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:30.171950   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:30.671775   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:30.708283   33898 api_server.go:72] duration metric: took 2.537081015s to wait for apiserver process to appear ...
	I1126 20:27:30.708314   33898 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:27:30.708337   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:33.453960   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:27:33.453986   33898 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:27:33.454002   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:33.487378   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 20:27:33.487416   33898 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 20:27:33.708900   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:33.715602   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:27:33.715633   33898 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:27:34.209379   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:34.239009   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:27:34.239042   33898 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:27:34.708549   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:34.724899   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:27:34.724937   33898 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:27:35.208545   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:35.213435   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1126 20:27:35.220215   33898 api_server.go:141] control plane version: v1.34.1
	I1126 20:27:35.220241   33898 api_server.go:131] duration metric: took 4.511920688s to wait for apiserver health ...
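The healthz wait above follows a common poll-until-ready pattern: probe the apiserver's `/healthz` endpoint at a fixed interval, treat 403/500 responses as "still warming up", and stop once it returns 200 `ok`. A minimal sketch of that pattern (not minikube's actual implementation; the flaky test server and function names here are hypothetical stand-ins):

```python
# Sketch of the poll-until-healthy pattern seen in the log: retry an HTTP
# health endpoint until it answers 200, tolerating transient failures.
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlakyHealthz(BaseHTTPRequestHandler):
    """Stand-in for an apiserver whose first two /healthz probes fail."""
    hits = 0

    def do_GET(self):
        FlakyHealthz.hits += 1
        if self.path == "/healthz" and FlakyHealthz.hits > 2:
            body, code = b"ok", 200
        else:
            body, code = b"healthz check failed", 500
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the test server quiet

def wait_for_healthz(url, interval=0.05, timeout=5.0):
    """Poll url until it returns 200; return the attempt count, else raise."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return attempts
        except urllib.error.HTTPError:
            pass  # 403/500 while the control plane warms up: retry
        time.sleep(interval)
    raise TimeoutError(f"{url} never returned 200")

server = HTTPServer(("127.0.0.1", 0), FlakyHealthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
attempts = wait_for_healthz(f"http://127.0.0.1:{port}/healthz")
server.shutdown()
print(attempts)  # first two probes return 500, the third succeeds -> 3
```

In the real run the same pattern is visible directly: two 403s (anonymous user before RBAC bootstrap), several 500s while `rbac/bootstrap-roles` post-start hooks finish, then 200 after roughly 4.5 seconds.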
	I1126 20:27:35.220252   33898 cni.go:84] Creating CNI manager for ""
	I1126 20:27:35.220260   33898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 20:27:35.222181   33898 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1126 20:27:35.223797   33898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1126 20:27:35.239882   33898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1126 20:27:35.288447   33898 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:27:35.299791   33898 system_pods.go:59] 7 kube-system pods found
	I1126 20:27:35.299834   33898 system_pods.go:61] "coredns-66bc5c9577-gtrnz" [6cb6d618-58f0-4786-87c2-a5a77f2d810a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:27:35.299843   33898 system_pods.go:61] "etcd-test-preload-627885" [ab588f93-be96-4e82-8337-679999bd6eba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:27:35.299855   33898 system_pods.go:61] "kube-apiserver-test-preload-627885" [5c7b25d5-a6dc-4b42-ac39-45825868aa7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:27:35.299860   33898 system_pods.go:61] "kube-controller-manager-test-preload-627885" [b5d8d2e4-fd46-4927-a3db-2a7e1a639ad4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:27:35.299864   33898 system_pods.go:61] "kube-proxy-rsghz" [b970a468-9ec9-4332-8e11-c561f8ebf03e] Running
	I1126 20:27:35.299869   33898 system_pods.go:61] "kube-scheduler-test-preload-627885" [18c1bc16-c874-4d3e-a293-73e8a37b43b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:27:35.299874   33898 system_pods.go:61] "storage-provisioner" [50240f3b-4247-4be5-96c6-f2264f259b11] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:27:35.299880   33898 system_pods.go:74] duration metric: took 11.411738ms to wait for pod list to return data ...
	I1126 20:27:35.299886   33898 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:27:35.306747   33898 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1126 20:27:35.306778   33898 node_conditions.go:123] node cpu capacity is 2
	I1126 20:27:35.306796   33898 node_conditions.go:105] duration metric: took 6.905093ms to run NodePressure ...
	I1126 20:27:35.306859   33898 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 20:27:35.593191   33898 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1126 20:27:35.599643   33898 kubeadm.go:744] kubelet initialised
	I1126 20:27:35.599666   33898 kubeadm.go:745] duration metric: took 6.450068ms waiting for restarted kubelet to initialise ...
	I1126 20:27:35.599680   33898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:27:35.620979   33898 ops.go:34] apiserver oom_adj: -16
	I1126 20:27:35.621004   33898 kubeadm.go:602] duration metric: took 8.876266581s to restartPrimaryControlPlane
	I1126 20:27:35.621013   33898 kubeadm.go:403] duration metric: took 8.923836623s to StartCluster
	I1126 20:27:35.621028   33898 settings.go:142] acquiring lock: {Name:mk37c98b12b8a7193cfde69315430fb7cd818f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:27:35.621117   33898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 20:27:35.621621   33898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/kubeconfig: {Name:mk17b8b187372462ddf3f30b5296315dcdc9fda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:27:35.621844   33898 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:27:35.621911   33898 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:27:35.622009   33898 addons.go:70] Setting storage-provisioner=true in profile "test-preload-627885"
	I1126 20:27:35.622031   33898 addons.go:70] Setting default-storageclass=true in profile "test-preload-627885"
	I1126 20:27:35.622035   33898 addons.go:239] Setting addon storage-provisioner=true in "test-preload-627885"
	W1126 20:27:35.622045   33898 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:27:35.622046   33898 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-627885"
	I1126 20:27:35.622076   33898 host.go:66] Checking if "test-preload-627885" exists ...
	I1126 20:27:35.622101   33898 config.go:182] Loaded profile config "test-preload-627885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:27:35.624428   33898 kapi.go:59] client config for test-preload-627885: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.key", CAFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:27:35.624688   33898 addons.go:239] Setting addon default-storageclass=true in "test-preload-627885"
	W1126 20:27:35.624704   33898 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:27:35.624721   33898 host.go:66] Checking if "test-preload-627885" exists ...
	I1126 20:27:35.625220   33898 out.go:179] * Verifying Kubernetes components...
	I1126 20:27:35.626061   33898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:27:35.626098   33898 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:27:35.626452   33898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:27:35.626905   33898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:27:35.627779   33898 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:27:35.627804   33898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:27:35.629253   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:35.629640   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:35.629673   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:35.629832   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:35.630494   33898 main.go:143] libmachine: domain test-preload-627885 has defined MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:35.630876   33898 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:81", ip: ""} in network mk-test-preload-627885: {Iface:virbr1 ExpiryTime:2025-11-26 21:27:15 +0000 UTC Type:0 Mac:52:54:00:f8:56:81 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:test-preload-627885 Clientid:01:52:54:00:f8:56:81}
	I1126 20:27:35.630903   33898 main.go:143] libmachine: domain test-preload-627885 has defined IP address 192.168.39.3 and MAC address 52:54:00:f8:56:81 in network mk-test-preload-627885
	I1126 20:27:35.631091   33898 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/test-preload-627885/id_rsa Username:docker}
	I1126 20:27:35.868072   33898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:27:35.893142   33898 node_ready.go:35] waiting up to 6m0s for node "test-preload-627885" to be "Ready" ...
	I1126 20:27:35.896480   33898 node_ready.go:49] node "test-preload-627885" is "Ready"
	I1126 20:27:35.896502   33898 node_ready.go:38] duration metric: took 3.310026ms for node "test-preload-627885" to be "Ready" ...
	I1126 20:27:35.896513   33898 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:27:35.896565   33898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:27:35.915952   33898 api_server.go:72] duration metric: took 294.073669ms to wait for apiserver process to appear ...
	I1126 20:27:35.915989   33898 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:27:35.916005   33898 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1126 20:27:35.923191   33898 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1126 20:27:35.924189   33898 api_server.go:141] control plane version: v1.34.1
	I1126 20:27:35.924209   33898 api_server.go:131] duration metric: took 8.214122ms to wait for apiserver health ...
	I1126 20:27:35.924220   33898 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:27:35.929502   33898 system_pods.go:59] 7 kube-system pods found
	I1126 20:27:35.929526   33898 system_pods.go:61] "coredns-66bc5c9577-gtrnz" [6cb6d618-58f0-4786-87c2-a5a77f2d810a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:27:35.929533   33898 system_pods.go:61] "etcd-test-preload-627885" [ab588f93-be96-4e82-8337-679999bd6eba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:27:35.929541   33898 system_pods.go:61] "kube-apiserver-test-preload-627885" [5c7b25d5-a6dc-4b42-ac39-45825868aa7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:27:35.929546   33898 system_pods.go:61] "kube-controller-manager-test-preload-627885" [b5d8d2e4-fd46-4927-a3db-2a7e1a639ad4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:27:35.929550   33898 system_pods.go:61] "kube-proxy-rsghz" [b970a468-9ec9-4332-8e11-c561f8ebf03e] Running
	I1126 20:27:35.929556   33898 system_pods.go:61] "kube-scheduler-test-preload-627885" [18c1bc16-c874-4d3e-a293-73e8a37b43b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:27:35.929573   33898 system_pods.go:61] "storage-provisioner" [50240f3b-4247-4be5-96c6-f2264f259b11] Running
	I1126 20:27:35.929581   33898 system_pods.go:74] duration metric: took 5.355976ms to wait for pod list to return data ...
	I1126 20:27:35.929587   33898 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:27:35.932491   33898 default_sa.go:45] found service account: "default"
	I1126 20:27:35.932509   33898 default_sa.go:55] duration metric: took 2.918209ms for default service account to be created ...
	I1126 20:27:35.932516   33898 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:27:35.938030   33898 system_pods.go:86] 7 kube-system pods found
	I1126 20:27:35.938054   33898 system_pods.go:89] "coredns-66bc5c9577-gtrnz" [6cb6d618-58f0-4786-87c2-a5a77f2d810a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:27:35.938067   33898 system_pods.go:89] "etcd-test-preload-627885" [ab588f93-be96-4e82-8337-679999bd6eba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:27:35.938074   33898 system_pods.go:89] "kube-apiserver-test-preload-627885" [5c7b25d5-a6dc-4b42-ac39-45825868aa7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:27:35.938096   33898 system_pods.go:89] "kube-controller-manager-test-preload-627885" [b5d8d2e4-fd46-4927-a3db-2a7e1a639ad4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:27:35.938101   33898 system_pods.go:89] "kube-proxy-rsghz" [b970a468-9ec9-4332-8e11-c561f8ebf03e] Running
	I1126 20:27:35.938112   33898 system_pods.go:89] "kube-scheduler-test-preload-627885" [18c1bc16-c874-4d3e-a293-73e8a37b43b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:27:35.938119   33898 system_pods.go:89] "storage-provisioner" [50240f3b-4247-4be5-96c6-f2264f259b11] Running
	I1126 20:27:35.938125   33898 system_pods.go:126] duration metric: took 5.604641ms to wait for k8s-apps to be running ...
	I1126 20:27:35.938131   33898 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:27:35.938175   33898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:27:35.955130   33898 system_svc.go:56] duration metric: took 16.986954ms WaitForService to wait for kubelet
	I1126 20:27:35.955163   33898 kubeadm.go:587] duration metric: took 333.287441ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:27:35.955181   33898 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:27:35.958074   33898 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1126 20:27:35.958106   33898 node_conditions.go:123] node cpu capacity is 2
	I1126 20:27:35.958118   33898 node_conditions.go:105] duration metric: took 2.931249ms to run NodePressure ...
	I1126 20:27:35.958129   33898 start.go:242] waiting for startup goroutines ...
	I1126 20:27:35.999042   33898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:27:36.012104   33898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:27:36.711343   33898 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1126 20:27:36.712616   33898 addons.go:530] duration metric: took 1.090703448s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1126 20:27:36.712654   33898 start.go:247] waiting for cluster config update ...
	I1126 20:27:36.712664   33898 start.go:256] writing updated cluster config ...
	I1126 20:27:36.712928   33898 ssh_runner.go:195] Run: rm -f paused
	I1126 20:27:36.719073   33898 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:27:36.719586   33898 kapi.go:59] client config for test-preload-627885: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/profiles/test-preload-627885/client.key", CAFile:"/home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:27:36.722832   33898 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gtrnz" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:27:38.730527   33898 pod_ready.go:104] pod "coredns-66bc5c9577-gtrnz" is not "Ready", error: <nil>
	W1126 20:27:40.732871   33898 pod_ready.go:104] pod "coredns-66bc5c9577-gtrnz" is not "Ready", error: <nil>
	W1126 20:27:43.229182   33898 pod_ready.go:104] pod "coredns-66bc5c9577-gtrnz" is not "Ready", error: <nil>
	W1126 20:27:45.728617   33898 pod_ready.go:104] pod "coredns-66bc5c9577-gtrnz" is not "Ready", error: <nil>
	I1126 20:27:47.228761   33898 pod_ready.go:94] pod "coredns-66bc5c9577-gtrnz" is "Ready"
	I1126 20:27:47.228793   33898 pod_ready.go:86] duration metric: took 10.505941126s for pod "coredns-66bc5c9577-gtrnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.231826   33898 pod_ready.go:83] waiting for pod "etcd-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.236420   33898 pod_ready.go:94] pod "etcd-test-preload-627885" is "Ready"
	I1126 20:27:47.236445   33898 pod_ready.go:86] duration metric: took 4.593808ms for pod "etcd-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.239022   33898 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.245340   33898 pod_ready.go:94] pod "kube-apiserver-test-preload-627885" is "Ready"
	I1126 20:27:47.245363   33898 pod_ready.go:86] duration metric: took 6.320631ms for pod "kube-apiserver-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.247553   33898 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.427135   33898 pod_ready.go:94] pod "kube-controller-manager-test-preload-627885" is "Ready"
	I1126 20:27:47.427162   33898 pod_ready.go:86] duration metric: took 179.582154ms for pod "kube-controller-manager-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:47.627028   33898 pod_ready.go:83] waiting for pod "kube-proxy-rsghz" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:48.027539   33898 pod_ready.go:94] pod "kube-proxy-rsghz" is "Ready"
	I1126 20:27:48.027566   33898 pod_ready.go:86] duration metric: took 400.509247ms for pod "kube-proxy-rsghz" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:48.226905   33898 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:49.027427   33898 pod_ready.go:94] pod "kube-scheduler-test-preload-627885" is "Ready"
	I1126 20:27:49.027455   33898 pod_ready.go:86] duration metric: took 800.519054ms for pod "kube-scheduler-test-preload-627885" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:27:49.027466   33898 pod_ready.go:40] duration metric: took 12.308347669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:27:49.068524   33898 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1126 20:27:49.070838   33898 out.go:179] * Done! kubectl is now configured to use "test-preload-627885" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.857757940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764188869857738479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89925791-1f53-4b95-b431-f9b333955b52 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.858672400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93040339-3ab7-4d5f-85e7-ea251ac87ff4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.858779765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93040339-3ab7-4d5f-85e7-ea251ac87ff4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.858972376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e96b3d1872fccbcb222ccbf8f6c0d7a657713fa4d5501146496e6e8675cc345,PodSandboxId:7897e3529e8ed8c05450889fca22c06dc382d7bcc6b3613539397e1755ca250a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764188858236772657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gtrnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb6d618-58f0-4786-87c2-a5a77f2d810a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1057a6ea09a9d02cf30cbb73f33c5331d08e2e25d46685c715da7afac1c89,PodSandboxId:e810d8f17a09a8db5c4ce934b3fd832f181e321efb651a8aa2186c5563f77929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764188854728564065,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rsghz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b970a468-9ec9-4332-8e11-c561f8ebf03e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8a5d349c5030e24f1b70296c306143fd1ad69bf0540c704aa4ab1d1c23000f,PodSandboxId:c3f0f25cc1015c5e75cfb3b15a67168699f805e7929ee92bdb7dc19a796d3660,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764188854593907310,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50240f3b-4247-4be5-96c6-f2264f259b11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783565f79cb0303a7391a3e1531f46698565a7dda5c711ffa50b4e27ca931de0,PodSandboxId:9705f07fe1624bd1c3c1aa5140b52c7ac7c907a2681a09144dd7e39adea12768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764188850343816986,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298f720d068b3260da4a23dc91e82d3e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:475d8c709acb4455175404899fc64ad9ea9019254ff4e966b0c78cf35b4d59e6,PodSandboxId:f5fa090c9adb97d9f6622065342bd1731fa346854a573e90526d758d33d46140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:
CONTAINER_RUNNING,CreatedAt:1764188850301989236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe8a1049df2580678b167bb3c61df6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78e76d44c388dff72558fabc9693084eeb14e55d9fe66411a3d364ab8232e31,PodSandboxId:bab53a5ab10d9b0d1db2394d90a02568d76f58fed050415a12a6a73b66478a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764188850276246248,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f159a0b41a795aaa98dfc43710250fca,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889d4fe862b786214af05bbf697c257a07eef58d371b3c8921ba41f95890ff0,PodSandboxId:dad90b8a83759f5efdc91c71d52b27e65dc3a864de8baa4a401a7ed7631d68e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764188850257034665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96229e0bb0d3064ac2ea9b5d6e79cc4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93040339-3ab7-4d5f-85e7-ea251ac87ff4 name=/runtime.v1.RuntimeServic
e/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.895457751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d0b2b9c-51ce-4453-a421-973305d11a2c name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.895548799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d0b2b9c-51ce-4453-a421-973305d11a2c name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.896885162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a318b27a-9e22-4443-8cf8-ba439ca1106a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.897892713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764188869897867198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a318b27a-9e22-4443-8cf8-ba439ca1106a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.899333911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=578bf6a1-2cbc-4591-8850-dbcc982229a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.899387785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=578bf6a1-2cbc-4591-8850-dbcc982229a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.899544337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e96b3d1872fccbcb222ccbf8f6c0d7a657713fa4d5501146496e6e8675cc345,PodSandboxId:7897e3529e8ed8c05450889fca22c06dc382d7bcc6b3613539397e1755ca250a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764188858236772657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gtrnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb6d618-58f0-4786-87c2-a5a77f2d810a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1057a6ea09a9d02cf30cbb73f33c5331d08e2e25d46685c715da7afac1c89,PodSandboxId:e810d8f17a09a8db5c4ce934b3fd832f181e321efb651a8aa2186c5563f77929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764188854728564065,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rsghz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b970a468-9ec9-4332-8e11-c561f8ebf03e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8a5d349c5030e24f1b70296c306143fd1ad69bf0540c704aa4ab1d1c23000f,PodSandboxId:c3f0f25cc1015c5e75cfb3b15a67168699f805e7929ee92bdb7dc19a796d3660,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764188854593907310,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50240f3b-4247-4be5-96c6-f2264f259b11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783565f79cb0303a7391a3e1531f46698565a7dda5c711ffa50b4e27ca931de0,PodSandboxId:9705f07fe1624bd1c3c1aa5140b52c7ac7c907a2681a09144dd7e39adea12768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764188850343816986,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298f720d068b3260da4a23dc91e82d3e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:475d8c709acb4455175404899fc64ad9ea9019254ff4e966b0c78cf35b4d59e6,PodSandboxId:f5fa090c9adb97d9f6622065342bd1731fa346854a573e90526d758d33d46140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:
CONTAINER_RUNNING,CreatedAt:1764188850301989236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe8a1049df2580678b167bb3c61df6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78e76d44c388dff72558fabc9693084eeb14e55d9fe66411a3d364ab8232e31,PodSandboxId:bab53a5ab10d9b0d1db2394d90a02568d76f58fed050415a12a6a73b66478a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764188850276246248,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f159a0b41a795aaa98dfc43710250fca,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889d4fe862b786214af05bbf697c257a07eef58d371b3c8921ba41f95890ff0,PodSandboxId:dad90b8a83759f5efdc91c71d52b27e65dc3a864de8baa4a401a7ed7631d68e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764188850257034665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96229e0bb0d3064ac2ea9b5d6e79cc4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=578bf6a1-2cbc-4591-8850-dbcc982229a6 name=/runtime.v1.RuntimeServic
e/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.934702150Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a37290f5-e61e-4995-a85d-09e0a719fdbe name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.934774695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a37290f5-e61e-4995-a85d-09e0a719fdbe name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.935836932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0eb521f1-66da-40bc-b08b-6f36203a3fc3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.936298255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764188869936273507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eb521f1-66da-40bc-b08b-6f36203a3fc3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.937364305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ee30aeb-d648-4a4b-9136-d884196c5213 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.937434902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ee30aeb-d648-4a4b-9136-d884196c5213 name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.937660905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e96b3d1872fccbcb222ccbf8f6c0d7a657713fa4d5501146496e6e8675cc345,PodSandboxId:7897e3529e8ed8c05450889fca22c06dc382d7bcc6b3613539397e1755ca250a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764188858236772657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gtrnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb6d618-58f0-4786-87c2-a5a77f2d810a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1057a6ea09a9d02cf30cbb73f33c5331d08e2e25d46685c715da7afac1c89,PodSandboxId:e810d8f17a09a8db5c4ce934b3fd832f181e321efb651a8aa2186c5563f77929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764188854728564065,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rsghz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b970a468-9ec9-4332-8e11-c561f8ebf03e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8a5d349c5030e24f1b70296c306143fd1ad69bf0540c704aa4ab1d1c23000f,PodSandboxId:c3f0f25cc1015c5e75cfb3b15a67168699f805e7929ee92bdb7dc19a796d3660,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764188854593907310,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50240f3b-4247-4be5-96c6-f2264f259b11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783565f79cb0303a7391a3e1531f46698565a7dda5c711ffa50b4e27ca931de0,PodSandboxId:9705f07fe1624bd1c3c1aa5140b52c7ac7c907a2681a09144dd7e39adea12768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764188850343816986,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298f720d068b3260da4a23dc91e82d3e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:475d8c709acb4455175404899fc64ad9ea9019254ff4e966b0c78cf35b4d59e6,PodSandboxId:f5fa090c9adb97d9f6622065342bd1731fa346854a573e90526d758d33d46140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:
CONTAINER_RUNNING,CreatedAt:1764188850301989236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe8a1049df2580678b167bb3c61df6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78e76d44c388dff72558fabc9693084eeb14e55d9fe66411a3d364ab8232e31,PodSandboxId:bab53a5ab10d9b0d1db2394d90a02568d76f58fed050415a12a6a73b66478a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764188850276246248,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f159a0b41a795aaa98dfc43710250fca,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889d4fe862b786214af05bbf697c257a07eef58d371b3c8921ba41f95890ff0,PodSandboxId:dad90b8a83759f5efdc91c71d52b27e65dc3a864de8baa4a401a7ed7631d68e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764188850257034665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96229e0bb0d3064ac2ea9b5d6e79cc4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ee30aeb-d648-4a4b-9136-d884196c5213 name=/runtime.v1.RuntimeServic
e/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.969064866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c4846e7-4826-45b8-b766-2ec380bfecd8 name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.969167687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c4846e7-4826-45b8-b766-2ec380bfecd8 name=/runtime.v1.RuntimeService/Version
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.973133404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c5304fd-9a62-43b2-a333-201a26ad13b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.973551017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764188869973530077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c5304fd-9a62-43b2-a333-201a26ad13b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.974844284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63ef08fb-3f29-4ee6-9d62-9af57041a4cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.975159316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63ef08fb-3f29-4ee6-9d62-9af57041a4cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 26 20:27:49 test-preload-627885 crio[843]: time="2025-11-26 20:27:49.975566718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e96b3d1872fccbcb222ccbf8f6c0d7a657713fa4d5501146496e6e8675cc345,PodSandboxId:7897e3529e8ed8c05450889fca22c06dc382d7bcc6b3613539397e1755ca250a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764188858236772657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gtrnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb6d618-58f0-4786-87c2-a5a77f2d810a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b1057a6ea09a9d02cf30cbb73f33c5331d08e2e25d46685c715da7afac1c89,PodSandboxId:e810d8f17a09a8db5c4ce934b3fd832f181e321efb651a8aa2186c5563f77929,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764188854728564065,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rsghz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b970a468-9ec9-4332-8e11-c561f8ebf03e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8a5d349c5030e24f1b70296c306143fd1ad69bf0540c704aa4ab1d1c23000f,PodSandboxId:c3f0f25cc1015c5e75cfb3b15a67168699f805e7929ee92bdb7dc19a796d3660,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764188854593907310,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50240f3b-4247-4be5-96c6-f2264f259b11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783565f79cb0303a7391a3e1531f46698565a7dda5c711ffa50b4e27ca931de0,PodSandboxId:9705f07fe1624bd1c3c1aa5140b52c7ac7c907a2681a09144dd7e39adea12768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764188850343816986,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298f720d068b3260da4a23dc91e82d3e,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:475d8c709acb4455175404899fc64ad9ea9019254ff4e966b0c78cf35b4d59e6,PodSandboxId:f5fa090c9adb97d9f6622065342bd1731fa346854a573e90526d758d33d46140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:
CONTAINER_RUNNING,CreatedAt:1764188850301989236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe8a1049df2580678b167bb3c61df6,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78e76d44c388dff72558fabc9693084eeb14e55d9fe66411a3d364ab8232e31,PodSandboxId:bab53a5ab10d9b0d1db2394d90a02568d76f58fed050415a12a6a73b66478a0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764188850276246248,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f159a0b41a795aaa98dfc43710250fca,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889d4fe862b786214af05bbf697c257a07eef58d371b3c8921ba41f95890ff0,PodSandboxId:dad90b8a83759f5efdc91c71d52b27e65dc3a864de8baa4a401a7ed7631d68e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764188850257034665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f96229e0bb0d3064ac2ea9b5d6e79cc4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63ef08fb-3f29-4ee6-9d62-9af57041a4cb name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	0e96b3d1872fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   1                   7897e3529e8ed       coredns-66bc5c9577-gtrnz                      kube-system
	85b1057a6ea09       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago      Running             kube-proxy                1                   e810d8f17a09a       kube-proxy-rsghz                              kube-system
	1e8a5d349c503       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   c3f0f25cc1015       storage-provisioner                           kube-system
	783565f79cb03       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   19 seconds ago      Running             kube-scheduler            1                   9705f07fe1624       kube-scheduler-test-preload-627885            kube-system
	475d8c709acb4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   19 seconds ago      Running             etcd                      1                   f5fa090c9adb9       etcd-test-preload-627885                      kube-system
	d78e76d44c388       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   19 seconds ago      Running             kube-controller-manager   1                   bab53a5ab10d9       kube-controller-manager-test-preload-627885   kube-system
	d889d4fe862b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   19 seconds ago      Running             kube-apiserver            1                   dad90b8a83759       kube-apiserver-test-preload-627885            kube-system
	
	
	==> coredns [0e96b3d1872fccbcb222ccbf8f6c0d7a657713fa4d5501146496e6e8675cc345] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56372 - 19620 "HINFO IN 691804716128660103.1712563432855869671. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.419081531s
	
	
	==> describe nodes <==
	Name:               test-preload-627885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-627885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=test-preload-627885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_26_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-627885
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:27:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:27:35 +0000   Wed, 26 Nov 2025 20:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:27:35 +0000   Wed, 26 Nov 2025 20:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:27:35 +0000   Wed, 26 Nov 2025 20:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:27:35 +0000   Wed, 26 Nov 2025 20:27:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    test-preload-627885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 65816bc4929548819973778935fcb046
	  System UUID:                65816bc4-9295-4881-9973-778935fcb046
	  Boot ID:                    7b9b14ba-e4e3-4487-ad6c-84e29477d753
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gtrnz                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     92s
	  kube-system                 etcd-test-preload-627885                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-627885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-627885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-rsghz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-627885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 90s                  kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node test-preload-627885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node test-preload-627885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node test-preload-627885 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 98s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     97s                  kubelet          Node test-preload-627885 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  97s                  kubelet          Node test-preload-627885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s                  kubelet          Node test-preload-627885 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                97s                  kubelet          Node test-preload-627885 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  97s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           93s                  node-controller  Node test-preload-627885 event: Registered Node test-preload-627885 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-627885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-627885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-627885 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-627885 has been rebooted, boot id: 7b9b14ba-e4e3-4487-ad6c-84e29477d753
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-627885 event: Registered Node test-preload-627885 in Controller
	
	
	==> dmesg <==
	[Nov26 20:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004405] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.016530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085702] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.096173] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.529932] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.030373] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [475d8c709acb4455175404899fc64ad9ea9019254ff4e966b0c78cf35b4d59e6] <==
	{"level":"warn","ts":"2025-11-26T20:27:32.355770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.387130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.411048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.428973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.430879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.440546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.449868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.459504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.465753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.475844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.486425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.495693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.512856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.523917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.540708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.552994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.562722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.578185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.580935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.594669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.624228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.651196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.663987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.673859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:27:32.731539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:27:50 up 0 min,  0 users,  load average: 0.65, 0.18, 0.06
	Linux test-preload-627885 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d889d4fe862b786214af05bbf697c257a07eef58d371b3c8921ba41f95890ff0] <==
	I1126 20:27:33.519543       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:27:33.528926       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:27:33.529069       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:27:33.529100       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:27:33.529106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:27:33.529111       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:27:33.536777       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1126 20:27:33.551411       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:27:33.559144       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:27:33.586183       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:27:33.588769       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:27:33.588868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:27:33.595026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:27:33.597335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:27:33.597451       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:27:33.597494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:27:34.185246       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:27:34.404176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:27:35.435259       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:27:35.494225       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:27:35.530069       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:27:35.538136       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:27:36.871291       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:27:37.170410       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:27:37.271380       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d78e76d44c388dff72558fabc9693084eeb14e55d9fe66411a3d364ab8232e31] <==
	I1126 20:27:36.867855       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:27:36.870113       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:27:36.870188       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:27:36.872438       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:27:36.874923       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:27:36.875016       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:27:36.875029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:27:36.875115       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:27:36.875359       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:27:36.878770       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:27:36.880025       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:27:36.880032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:27:36.883393       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:27:36.883617       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:27:36.883775       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:27:36.883899       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-627885"
	I1126 20:27:36.883963       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:27:36.888901       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:27:36.892181       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:27:36.892293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:27:36.893434       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:27:36.896757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:27:36.904156       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:27:36.908171       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:27:36.920438       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [85b1057a6ea09a9d02cf30cbb73f33c5331d08e2e25d46685c715da7afac1c89] <==
	I1126 20:27:34.953363       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:27:35.055447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:27:35.055485       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.3"]
	E1126 20:27:35.055635       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:27:35.100301       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1126 20:27:35.100347       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1126 20:27:35.100374       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:27:35.110314       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:27:35.110746       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:27:35.110779       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:27:35.115436       1 config.go:200] "Starting service config controller"
	I1126 20:27:35.115477       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:27:35.115541       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:27:35.115564       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:27:35.115650       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:27:35.115654       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:27:35.115790       1 config.go:309] "Starting node config controller"
	I1126 20:27:35.115818       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:27:35.216078       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:27:35.216096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:27:35.216121       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:27:35.216159       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [783565f79cb0303a7391a3e1531f46698565a7dda5c711ffa50b4e27ca931de0] <==
	I1126 20:27:31.736075       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:27:33.554536       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:27:33.554641       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:27:33.576809       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:27:33.576852       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:27:33.576907       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:27:33.576917       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:27:33.576928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:27:33.576953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:27:33.577273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:27:33.577428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:27:33.677185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:27:33.677314       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:27:33.677426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: I1126 20:27:33.569974    1172 setters.go:543] "Node became not ready" node="test-preload-627885" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T20:27:33Z","lastTransitionTime":"2025-11-26T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: E1126 20:27:33.598461    1172 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-627885\" already exists" pod="kube-system/kube-controller-manager-test-preload-627885"
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: I1126 20:27:33.598495    1172 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-627885"
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: E1126 20:27:33.623500    1172 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-627885\" already exists" pod="kube-system/kube-scheduler-test-preload-627885"
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: I1126 20:27:33.623545    1172 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-627885"
	Nov 26 20:27:33 test-preload-627885 kubelet[1172]: E1126 20:27:33.635984    1172 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-test-preload-627885\" already exists" pod="kube-system/etcd-test-preload-627885"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: I1126 20:27:34.079778    1172 apiserver.go:52] "Watching apiserver"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: E1126 20:27:34.086071    1172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-gtrnz" podUID="6cb6d618-58f0-4786-87c2-a5a77f2d810a"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: I1126 20:27:34.118697    1172 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: I1126 20:27:34.164903    1172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b970a468-9ec9-4332-8e11-c561f8ebf03e-xtables-lock\") pod \"kube-proxy-rsghz\" (UID: \"b970a468-9ec9-4332-8e11-c561f8ebf03e\") " pod="kube-system/kube-proxy-rsghz"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: I1126 20:27:34.164950    1172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/50240f3b-4247-4be5-96c6-f2264f259b11-tmp\") pod \"storage-provisioner\" (UID: \"50240f3b-4247-4be5-96c6-f2264f259b11\") " pod="kube-system/storage-provisioner"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: I1126 20:27:34.164975    1172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b970a468-9ec9-4332-8e11-c561f8ebf03e-lib-modules\") pod \"kube-proxy-rsghz\" (UID: \"b970a468-9ec9-4332-8e11-c561f8ebf03e\") " pod="kube-system/kube-proxy-rsghz"
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: E1126 20:27:34.165123    1172 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: E1126 20:27:34.165198    1172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume podName:6cb6d618-58f0-4786-87c2-a5a77f2d810a nodeName:}" failed. No retries permitted until 2025-11-26 20:27:34.665180145 +0000 UTC m=+6.682236413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume") pod "coredns-66bc5c9577-gtrnz" (UID: "6cb6d618-58f0-4786-87c2-a5a77f2d810a") : object "kube-system"/"coredns" not registered
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: E1126 20:27:34.668488    1172 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 26 20:27:34 test-preload-627885 kubelet[1172]: E1126 20:27:34.671496    1172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume podName:6cb6d618-58f0-4786-87c2-a5a77f2d810a nodeName:}" failed. No retries permitted until 2025-11-26 20:27:35.671467402 +0000 UTC m=+7.688523682 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume") pod "coredns-66bc5c9577-gtrnz" (UID: "6cb6d618-58f0-4786-87c2-a5a77f2d810a") : object "kube-system"/"coredns" not registered
	Nov 26 20:27:35 test-preload-627885 kubelet[1172]: E1126 20:27:35.677325    1172 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 26 20:27:35 test-preload-627885 kubelet[1172]: E1126 20:27:35.677400    1172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume podName:6cb6d618-58f0-4786-87c2-a5a77f2d810a nodeName:}" failed. No retries permitted until 2025-11-26 20:27:37.677386394 +0000 UTC m=+9.694442674 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6cb6d618-58f0-4786-87c2-a5a77f2d810a-config-volume") pod "coredns-66bc5c9577-gtrnz" (UID: "6cb6d618-58f0-4786-87c2-a5a77f2d810a") : object "kube-system"/"coredns" not registered
	Nov 26 20:27:35 test-preload-627885 kubelet[1172]: I1126 20:27:35.703779    1172 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:27:38 test-preload-627885 kubelet[1172]: E1126 20:27:38.169802    1172 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764188858168783526  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 26 20:27:38 test-preload-627885 kubelet[1172]: E1126 20:27:38.169825    1172 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764188858168783526  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 26 20:27:40 test-preload-627885 kubelet[1172]: I1126 20:27:40.286471    1172 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:27:47 test-preload-627885 kubelet[1172]: I1126 20:27:47.041828    1172 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:27:48 test-preload-627885 kubelet[1172]: E1126 20:27:48.174213    1172 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764188868172015349  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 26 20:27:48 test-preload-627885 kubelet[1172]: E1126 20:27:48.174241    1172 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764188868172015349  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [1e8a5d349c5030e24f1b70296c306143fd1ad69bf0540c704aa4ab1d1c23000f] <==
	I1126 20:27:34.788892       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-627885 -n test-preload-627885
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-627885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-627885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-627885
--- FAIL: TestPreload (150.37s)


Test pass (309/351)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.28
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.65
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.64
22 TestOffline 77.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 132.09
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 18.91
36 TestAddons/parallel/RegistryCreds 0.66
38 TestAddons/parallel/InspektorGadget 10.74
39 TestAddons/parallel/MetricsServer 6.79
41 TestAddons/parallel/CSI 53.41
42 TestAddons/parallel/Headlamp 21.45
43 TestAddons/parallel/CloudSpanner 5.84
44 TestAddons/parallel/LocalPath 12.27
45 TestAddons/parallel/NvidiaDevicePlugin 5.73
46 TestAddons/parallel/Yakd 12.47
48 TestAddons/StoppedEnableDisable 89.95
49 TestCertOptions 43.25
50 TestCertExpiration 294.8
52 TestForceSystemdFlag 59.8
53 TestForceSystemdEnv 40.3
58 TestErrorSpam/setup 37.76
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.67
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.86
63 TestErrorSpam/stop 5.56
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.8
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.25
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 36.82
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.49
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 30.87
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.8
97 TestFunctional/parallel/ServiceCmdConnect 8.44
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 44.69
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.28
103 TestFunctional/parallel/MySQL 28.72
104 TestFunctional/parallel/FileSync 0.16
105 TestFunctional/parallel/CertSync 1.08
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.27
123 TestFunctional/parallel/Version/short 0.06
124 TestFunctional/parallel/Version/components 0.77
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.62
129 TestFunctional/parallel/ImageCommands/ImageBuild 9.76
130 TestFunctional/parallel/ImageCommands/Setup 0.51
131 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.96
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
140 TestFunctional/parallel/ProfileCmd/profile_list 0.31
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
145 TestFunctional/parallel/MountCmd/any-port 7.97
146 TestFunctional/parallel/ServiceCmd/List 0.26
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.23
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
149 TestFunctional/parallel/ServiceCmd/Format 0.26
150 TestFunctional/parallel/ServiceCmd/URL 0.27
151 TestFunctional/parallel/MountCmd/specific-port 1.53
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.25
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 209.12
161 TestMultiControlPlane/serial/DeployApp 5.95
162 TestMultiControlPlane/serial/PingHostFromPods 1.34
163 TestMultiControlPlane/serial/AddWorkerNode 44.72
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 10.83
167 TestMultiControlPlane/serial/StopSecondaryNode 75.2
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
169 TestMultiControlPlane/serial/RestartSecondaryNode 41.11
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 385.53
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.88
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
174 TestMultiControlPlane/serial/StopCluster 257.16
175 TestMultiControlPlane/serial/RestartCluster 96.05
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
177 TestMultiControlPlane/serial/AddSecondaryNode 83.5
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.72
183 TestJSONOutput/start/Command 88.91
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.64
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.52
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 81.02
215 TestMountStart/serial/StartWithMountFirst 22.36
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 22.1
218 TestMountStart/serial/VerifyMountSecond 0.29
219 TestMountStart/serial/DeleteFirst 0.7
220 TestMountStart/serial/VerifyMountPostDelete 0.29
221 TestMountStart/serial/Stop 1.33
222 TestMountStart/serial/RestartStopped 20.74
223 TestMountStart/serial/VerifyMountPostStop 0.3
226 TestMultiNode/serial/FreshStart2Nodes 127.07
227 TestMultiNode/serial/DeployApp2Nodes 5.41
228 TestMultiNode/serial/PingHostFrom2Pods 0.86
229 TestMultiNode/serial/AddNode 45.03
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 5.95
233 TestMultiNode/serial/StopNode 2.51
234 TestMultiNode/serial/StartAfterStop 38.13
235 TestMultiNode/serial/RestartKeepsNodes 297.62
236 TestMultiNode/serial/DeleteNode 2.58
237 TestMultiNode/serial/StopMultiNode 175.49
238 TestMultiNode/serial/RestartMultiNode 98.01
239 TestMultiNode/serial/ValidateNameConflict 41.04
246 TestScheduledStopUnix 111
250 TestRunningBinaryUpgrade 382.49
252 TestKubernetesUpgrade 210.72
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 82.91
264 TestNetworkPlugins/group/false 3.39
268 TestStoppedBinaryUpgrade/Setup 0.83
269 TestStoppedBinaryUpgrade/Upgrade 135.75
270 TestNoKubernetes/serial/StartWithStopK8s 48.46
271 TestNoKubernetes/serial/Start 50.91
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
281 TestPause/serial/Start 97.96
282 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
284 TestNoKubernetes/serial/ProfileList 0.99
285 TestNoKubernetes/serial/Stop 1.35
286 TestNoKubernetes/serial/StartNoArgs 47.77
287 TestISOImage/Setup 30.7
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
290 TestISOImage/Binaries/crictl 0.2
291 TestISOImage/Binaries/curl 0.18
292 TestISOImage/Binaries/docker 0.18
293 TestISOImage/Binaries/git 0.2
294 TestISOImage/Binaries/iptables 0.19
295 TestISOImage/Binaries/podman 0.19
296 TestISOImage/Binaries/rsync 0.18
297 TestISOImage/Binaries/socat 0.18
298 TestISOImage/Binaries/wget 0.2
299 TestISOImage/Binaries/VBoxControl 0.2
300 TestISOImage/Binaries/VBoxService 0.19
301 TestPause/serial/SecondStartNoReconfiguration 50.3
302 TestPause/serial/Pause 0.74
303 TestPause/serial/VerifyStatus 0.23
304 TestPause/serial/Unpause 0.71
305 TestPause/serial/PauseAgain 0.97
306 TestPause/serial/DeletePaused 0.9
307 TestPause/serial/VerifyDeletedResources 0.67
308 TestNetworkPlugins/group/auto/Start 94.7
309 TestNetworkPlugins/group/kindnet/Start 91.59
310 TestNetworkPlugins/group/auto/KubeletFlags 0.18
311 TestNetworkPlugins/group/auto/NetCatPod 11.28
312 TestNetworkPlugins/group/auto/DNS 0.15
313 TestNetworkPlugins/group/auto/Localhost 0.15
314 TestNetworkPlugins/group/auto/HairPin 0.14
315 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/Start 73.83
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
318 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
319 TestNetworkPlugins/group/custom-flannel/Start 85.44
320 TestNetworkPlugins/group/kindnet/DNS 0.18
321 TestNetworkPlugins/group/kindnet/Localhost 0.15
322 TestNetworkPlugins/group/kindnet/HairPin 0.14
323 TestNetworkPlugins/group/enable-default-cni/Start 117.21
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/flannel/Start 75.05
326 TestNetworkPlugins/group/calico/KubeletFlags 0.2
327 TestNetworkPlugins/group/calico/NetCatPod 12.3
328 TestNetworkPlugins/group/calico/DNS 0.2
329 TestNetworkPlugins/group/calico/Localhost 0.17
330 TestNetworkPlugins/group/calico/HairPin 0.19
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
333 TestNetworkPlugins/group/custom-flannel/DNS 0.19
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
336 TestNetworkPlugins/group/bridge/Start 89.72
338 TestStartStop/group/old-k8s-version/serial/FirstStart 69.61
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
346 TestNetworkPlugins/group/flannel/NetCatPod 13.32
347 TestNetworkPlugins/group/flannel/DNS 0.18
348 TestNetworkPlugins/group/flannel/Localhost 0.15
349 TestNetworkPlugins/group/flannel/HairPin 0.18
351 TestStartStop/group/no-preload/serial/FirstStart 74.63
353 TestStartStop/group/embed-certs/serial/FirstStart 91.61
354 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
356 TestNetworkPlugins/group/bridge/NetCatPod 12.27
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
358 TestStartStop/group/old-k8s-version/serial/Stop 81.69
359 TestNetworkPlugins/group/bridge/DNS 0.15
360 TestNetworkPlugins/group/bridge/Localhost 0.17
361 TestNetworkPlugins/group/bridge/HairPin 0.16
363 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.93
364 TestStartStop/group/no-preload/serial/DeployApp 10.31
365 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
366 TestStartStop/group/no-preload/serial/Stop 72.74
367 TestStartStop/group/embed-certs/serial/DeployApp 10.28
368 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
370 TestStartStop/group/old-k8s-version/serial/SecondStart 47.83
371 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
372 TestStartStop/group/embed-certs/serial/Stop 83.7
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.53
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
376 TestStartStop/group/no-preload/serial/SecondStart 55.83
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 9.01
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
380 TestStartStop/group/old-k8s-version/serial/Pause 2.63
382 TestStartStop/group/newest-cni/serial/FirstStart 47.07
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
384 TestStartStop/group/embed-certs/serial/SecondStart 49.71
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
386 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 81.63
387 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
388 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.96
390 TestStartStop/group/newest-cni/serial/Stop 11.21
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/no-preload/serial/Pause 3.24
394 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
395 TestStartStop/group/newest-cni/serial/SecondStart 42.31
397 TestISOImage/PersistentMounts//data 0.2
398 TestISOImage/PersistentMounts//var/lib/docker 0.17
399 TestISOImage/PersistentMounts//var/lib/cni 0.18
400 TestISOImage/PersistentMounts//var/lib/kubelet 0.16
401 TestISOImage/PersistentMounts//var/lib/minikube 0.17
402 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
403 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
404 TestISOImage/VersionJSON 0.18
405 TestISOImage/eBPFSupport 0.18
406 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
407 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
408 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
409 TestStartStop/group/embed-certs/serial/Pause 3.1
410 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
413 TestStartStop/group/newest-cni/serial/Pause 3.39
414 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
415 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
416 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
417 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.54

TestDownloadOnly/v1.28.0/json-events (10.28s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-843204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-843204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.274753947s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.28s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1126 19:35:02.865430   11003 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1126 19:35:02.865504   11003 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-843204
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-843204: exit status 85 (73.048686ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-843204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-843204 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:34:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:34:52.641630   11015 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:34:52.641849   11015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:52.641857   11015 out.go:374] Setting ErrFile to fd 2...
	I1126 19:34:52.641861   11015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:34:52.642027   11015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	W1126 19:34:52.642160   11015 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21974-7091/.minikube/config/config.json: open /home/jenkins/minikube-integration/21974-7091/.minikube/config/config.json: no such file or directory
	I1126 19:34:52.642577   11015 out.go:368] Setting JSON to true
	I1126 19:34:52.643458   11015 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1043,"bootTime":1764184650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:34:52.643511   11015 start.go:143] virtualization: kvm guest
	I1126 19:34:52.647564   11015 out.go:99] [download-only-843204] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1126 19:34:52.647729   11015 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball: no such file or directory
	I1126 19:34:52.647785   11015 notify.go:221] Checking for updates...
	I1126 19:34:52.648971   11015 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:34:52.650241   11015 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:34:52.651562   11015 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:34:52.652824   11015 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:34:52.653906   11015 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1126 19:34:52.655955   11015 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1126 19:34:52.656217   11015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:34:53.162828   11015 out.go:99] Using the kvm2 driver based on user configuration
	I1126 19:34:53.162868   11015 start.go:309] selected driver: kvm2
	I1126 19:34:53.162876   11015 start.go:927] validating driver "kvm2" against <nil>
	I1126 19:34:53.163264   11015 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:34:53.163798   11015 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1126 19:34:53.163962   11015 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1126 19:34:53.163995   11015 cni.go:84] Creating CNI manager for ""
	I1126 19:34:53.164050   11015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1126 19:34:53.164061   11015 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1126 19:34:53.164131   11015 start.go:353] cluster config:
	{Name:download-only-843204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-843204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:34:53.164358   11015 iso.go:125] acquiring lock: {Name:mkfe3dbb7c1a56d5a5080a4e71d079899ad19ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:34:53.165983   11015 out.go:99] Downloading VM boot image ...
	I1126 19:34:53.166023   11015 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21974-7091/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1126 19:34:57.805968   11015 out.go:99] Starting "download-only-843204" primary control-plane node in "download-only-843204" cluster
	I1126 19:34:57.806000   11015 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 19:34:57.824992   11015 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1126 19:34:57.825027   11015 cache.go:65] Caching tarball of preloaded images
	I1126 19:34:57.825201   11015 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 19:34:57.826952   11015 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1126 19:34:57.826971   11015 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1126 19:34:57.853932   11015 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1126 19:34:57.854071   11015 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1126 19:35:02.254625   11015 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1126 19:35:02.254987   11015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/download-only-843204/config.json ...
	I1126 19:35:02.255027   11015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/download-only-843204/config.json: {Name:mkf5dae80ab6bdb45bffbc60758b61716bb2bc6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:35:02.255233   11015 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 19:35:02.255432   11015 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21974-7091/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-843204 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843204"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-843204
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (3.65s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-499024 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-499024 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.654599727s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.65s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1126 19:35:06.903528   11003 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1126 19:35:06.903561   11003 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-499024
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-499024: exit status 85 (75.23443ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-843204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-843204 │ jenkins │ v1.37.0 │ 26 Nov 25 19:34 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
	│ delete  │ -p download-only-843204                                                                                                                                                 │ download-only-843204 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
	│ start   │ -o=json --download-only -p download-only-499024 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-499024 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:35:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:35:03.298347   11226 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:35:03.298428   11226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:03.298436   11226 out.go:374] Setting ErrFile to fd 2...
	I1126 19:35:03.298440   11226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:03.298607   11226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 19:35:03.299047   11226 out.go:368] Setting JSON to true
	I1126 19:35:03.299812   11226 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1053,"bootTime":1764184650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:35:03.299859   11226 start.go:143] virtualization: kvm guest
	I1126 19:35:03.301644   11226 out.go:99] [download-only-499024] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:35:03.301777   11226 notify.go:221] Checking for updates...
	I1126 19:35:03.303266   11226 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:35:03.304707   11226 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:35:03.306193   11226 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:35:03.307574   11226 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:35:03.309233   11226 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-499024 host does not exist
	  To start a cluster, run: "minikube start -p download-only-499024"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-499024
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
I1126 19:35:07.591290   11003 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-630783 --alsologtostderr --binary-mirror http://127.0.0.1:33899 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-630783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-630783
--- PASS: TestBinaryMirror (0.64s)

TestOffline (77.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-482178 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-482178 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.710332061s)
helpers_test.go:175: Cleaning up "offline-crio-482178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-482178
--- PASS: TestOffline (77.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-198878
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-198878: exit status 85 (63.032056ms)

-- stdout --
	* Profile "addons-198878" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-198878"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-198878
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-198878: exit status 85 (64.051271ms)

-- stdout --
	* Profile "addons-198878" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-198878"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (132.09s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-198878 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-198878 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.085505523s)
--- PASS: TestAddons/Setup (132.09s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-198878 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-198878 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-198878 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-198878 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b398ed93-d3e3-42f0-9ff8-eb0a88b0786a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b398ed93-d3e3-42f0-9ff8-eb0a88b0786a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005761418s
addons_test.go:694: (dbg) Run:  kubectl --context addons-198878 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-198878 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-198878 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

TestAddons/parallel/Registry (18.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.300402ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-frf72" [7122caf5-586e-4824-aa05-e6968244eddd] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.013751083s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-6ltms" [2e78d651-29c0-42f1-a079-f759abd8acb2] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00734433s
addons_test.go:392: (dbg) Run:  kubectl --context addons-198878 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-198878 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-198878 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.046461159s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.91s)

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.318275ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-198878
addons_test.go:332: (dbg) Run:  kubectl --context addons-198878 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-v2f5n" [4fd8200a-d1bd-4b93-8c3b-a9e0cc32e159] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005037255s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable inspektor-gadget --alsologtostderr -v=1: (5.73424526s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (6.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.982715ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8krt2" [437ee4fe-01d9-47d9-8864-e19c70cc2b3e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003421571s
addons_test.go:463: (dbg) Run:  kubectl --context addons-198878 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

TestAddons/parallel/CSI (53.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1126 19:37:57.921798   11003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1126 19:37:57.932902   11003 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1126 19:37:57.932929   11003 kapi.go:107] duration metric: took 11.150005ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 11.158607ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-198878 create -f testdata/csi-hostpath-driver/pvc.yaml
2025/11/26 19:37:57 [DEBUG] GET http://192.168.39.123:5000
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-198878 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d5d4e1fa-3509-4051-9fde-a973f28c7293] Pending
helpers_test.go:352: "task-pv-pod" [d5d4e1fa-3509-4051-9fde-a973f28c7293] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d5d4e1fa-3509-4051-9fde-a973f28c7293] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005246577s
addons_test.go:572: (dbg) Run:  kubectl --context addons-198878 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-198878 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-198878 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-198878 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-198878 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-198878 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-198878 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [74b2b916-fb11-4c95-a928-48b6a269d1ca] Pending
helpers_test.go:352: "task-pv-pod-restore" [74b2b916-fb11-4c95-a928-48b6a269d1ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [74b2b916-fb11-4c95-a928-48b6a269d1ca] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004558072s
addons_test.go:614: (dbg) Run:  kubectl --context addons-198878 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-198878 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-198878 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.015020568s)
--- PASS: TestAddons/parallel/CSI (53.41s)

TestAddons/parallel/Headlamp (21.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-198878 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-198878 --alsologtostderr -v=1: (1.185662597s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-44dhn" [575bccd8-e6ab-49e0-ad2c-ef803ec43b00] Pending
helpers_test.go:352: "headlamp-dfcdc64b-44dhn" [575bccd8-e6ab-49e0-ad2c-ef803ec43b00] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-44dhn" [575bccd8-e6ab-49e0-ad2c-ef803ec43b00] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004604783s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable headlamp --alsologtostderr -v=1: (6.260776269s)
--- PASS: TestAddons/parallel/Headlamp (21.45s)

TestAddons/parallel/CloudSpanner (5.84s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-jh27b" [875271dc-a42b-4a12-a9ff-08ce170824f4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007922529s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.84s)

TestAddons/parallel/LocalPath (12.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-198878 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-198878 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [de3d2cb8-8760-4bdb-b05e-87e01f02e717] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [de3d2cb8-8760-4bdb-b05e-87e01f02e717] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [de3d2cb8-8760-4bdb-b05e-87e01f02e717] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00443676s
addons_test.go:967: (dbg) Run:  kubectl --context addons-198878 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 ssh "cat /opt/local-path-provisioner/pvc-a22e263c-d92b-4e58-83ac-82f62be484b9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-198878 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-198878 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.27s)

TestAddons/parallel/NvidiaDevicePlugin (5.73s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rhjld" [67364572-4090-46f0-bd16-407a2f2eecf7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0099935s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.73s)

TestAddons/parallel/Yakd (12.47s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-68klr" [3a13fb86-8cba-476e-b812-5f48250091d3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.037503038s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable yakd --alsologtostderr -v=1: (6.427107221s)
--- PASS: TestAddons/parallel/Yakd (12.47s)

TestAddons/StoppedEnableDisable (89.95s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-198878
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-198878: (1m29.739018786s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-198878
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-198878
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-198878
--- PASS: TestAddons/StoppedEnableDisable (89.95s)

TestCertOptions (43.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-128870 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-128870 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (41.912706329s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-128870 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-128870 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-128870 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-128870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-128870
--- PASS: TestCertOptions (43.25s)

TestCertExpiration (294.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-649284 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-649284 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.397102149s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-649284 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-649284 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (53.42447464s)
helpers_test.go:175: Cleaning up "cert-expiration-649284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-649284
--- PASS: TestCertExpiration (294.80s)

TestForceSystemdFlag (59.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-885659 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-885659 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.652437656s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-885659 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-885659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-885659
--- PASS: TestForceSystemdFlag (59.80s)

TestForceSystemdEnv (40.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-626886 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-626886 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.615213652s)
helpers_test.go:175: Cleaning up "force-systemd-env-626886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-626886
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-626886: (1.682364365s)
--- PASS: TestForceSystemdEnv (40.30s)

TestErrorSpam/setup (37.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-803636 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-803636 --driver=kvm2  --container-runtime=crio
E1126 19:42:21.025831   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.032300   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.043752   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.065167   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.106616   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.188170   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.349830   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:21.671318   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:22.313426   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:23.595055   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:26.158036   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:31.280126   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:42:41.521575   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-803636 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-803636 --driver=kvm2  --container-runtime=crio: (37.755084936s)
--- PASS: TestErrorSpam/setup (37.76s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.67s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 status
--- PASS: TestErrorSpam/status (0.67s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (5.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop: (2.047224892s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop: (1.569651151s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-803636 --log_dir /tmp/nospam-803636 stop: (1.944492476s)
--- PASS: TestErrorSpam/stop (5.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21974-7091/.minikube/files/etc/test/nested/copy/11003/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1126 19:43:02.003049   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:43:42.965604   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-110910 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.800917354s)
--- PASS: TestFunctional/serial/StartWithProxy (84.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.25s)

=== RUN   TestFunctional/serial/SoftStart
I1126 19:44:20.378993   11003 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-110910 --alsologtostderr -v=8: (37.244690592s)
functional_test.go:678: soft start took 37.245400769s for "functional-110910" cluster.
I1126 19:44:57.624008   11003 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.25s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-110910 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:3.1: (1.025871092s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:3.3: (1.08295229s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 cache add registry.k8s.io/pause:latest: (1.089952285s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-110910 /tmp/TestFunctionalserialCacheCmdcacheadd_local2228729105/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache add minikube-local-cache-test:functional-110910
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache delete minikube-local-cache-test:functional-110910
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-110910
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.24034ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 kubectl -- --context functional-110910 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-110910 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1126 19:45:04.887259   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-110910 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.81742823s)
functional_test.go:776: restart took 36.817555627s for "functional-110910" cluster.
I1126 19:45:41.097988   11003 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.82s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-110910 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 logs: (1.358596949s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 logs --file /tmp/TestFunctionalserialLogsFileCmd2080655465/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 logs --file /tmp/TestFunctionalserialLogsFileCmd2080655465/001/logs.txt: (1.37030822s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (4.49s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-110910 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-110910
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-110910: exit status 115 (236.425215ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.103:30955 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-110910 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 config get cpus: exit status 14 (60.15066ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 config get cpus: exit status 14 (65.883706ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (30.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-110910 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-110910 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 16934: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.87s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-110910 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (111.870406ms)
-- stdout --
	* [functional-110910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1126 19:45:59.170367   16854 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:45:59.170604   16854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:45:59.170614   16854 out.go:374] Setting ErrFile to fd 2...
	I1126 19:45:59.170618   16854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:45:59.170851   16854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 19:45:59.171272   16854 out.go:368] Setting JSON to false
	I1126 19:45:59.172040   16854 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1709,"bootTime":1764184650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:45:59.172099   16854 start.go:143] virtualization: kvm guest
	I1126 19:45:59.174143   16854 out.go:179] * [functional-110910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 19:45:59.175592   16854 notify.go:221] Checking for updates...
	I1126 19:45:59.175606   16854 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:45:59.176983   16854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:45:59.178352   16854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:45:59.179861   16854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:45:59.181048   16854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:45:59.182158   16854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:45:59.183698   16854 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:45:59.184167   16854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:45:59.214621   16854 out.go:179] * Using the kvm2 driver based on existing profile
	I1126 19:45:59.215887   16854 start.go:309] selected driver: kvm2
	I1126 19:45:59.215908   16854 start.go:927] validating driver "kvm2" against &{Name:functional-110910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-110910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:45:59.215998   16854 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:45:59.217938   16854 out.go:203] 
	W1126 19:45:59.218988   16854 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1126 19:45:59.220001   16854 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110910 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-110910 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (115.926144ms)

-- stdout --
	* [functional-110910] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1126 19:45:59.669776   16907 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:45:59.670125   16907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:45:59.670139   16907 out.go:374] Setting ErrFile to fd 2...
	I1126 19:45:59.670145   16907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:45:59.670574   16907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 19:45:59.671210   16907 out.go:368] Setting JSON to false
	I1126 19:45:59.672453   16907 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1710,"bootTime":1764184650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 19:45:59.672536   16907 start.go:143] virtualization: kvm guest
	I1126 19:45:59.674527   16907 out.go:179] * [functional-110910] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1126 19:45:59.675987   16907 notify.go:221] Checking for updates...
	I1126 19:45:59.676021   16907 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:45:59.677483   16907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:45:59.678885   16907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 19:45:59.680365   16907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 19:45:59.681700   16907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 19:45:59.682993   16907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:45:59.684659   16907 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:45:59.685141   16907 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:45:59.715767   16907 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1126 19:45:59.716862   16907 start.go:309] selected driver: kvm2
	I1126 19:45:59.716875   16907 start.go:927] validating driver "kvm2" against &{Name:functional-110910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-110910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:45:59.716984   16907 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:45:59.718983   16907 out.go:203] 
	W1126 19:45:59.720316   16907 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1126 19:45:59.721606   16907 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
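The French failure logged above is the same RSRC_INSUFFICIENT_REQ_MEMORY guard seen in English in TestFunctional/parallel/DryRun, just localized: the requested 250MB falls below minikube's 1800MB usable minimum, so the start aborts before any VM work happens. A minimal sketch of that guard (`validateMemory` is a hypothetical helper for illustration, not minikube's actual implementation):

```go
package main

import "fmt"

// minUsableMB mirrors the "usable minimum of 1800MB" in the log above.
const minUsableMB = 1800

// validateMemory is a hypothetical sketch of the check behind
// RSRC_INSUFFICIENT_REQ_MEMORY: a --memory request below the usable
// minimum aborts the start (dry run or not) with a non-zero exit.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250) != nil)  // the failing --memory 250MB case: true
	fmt.Println(validateMemory(4096) == nil) // the profile's configured 4096MB: true
}
```

The test only checks that the localized error is emitted; the validation itself is language-independent.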

TestFunctional/parallel/StatusCmd (0.8s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

TestFunctional/parallel/ServiceCmdConnect (8.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-110910 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-110910 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sm8vm" [4667647a-4793-49ac-8359-c16e19ecce12] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-sm8vm" [4667647a-4793-49ac-8359-c16e19ecce12] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007230184s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.103:30216
functional_test.go:1680: http://192.168.39.103:30216: success! body:
Request served by hello-node-connect-7d85dfc575-sm8vm

HTTP/1.1 GET /

Host: 192.168.39.103:30216
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.44s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (44.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e854184d-45b9-49d1-bdb5-53090360e0f3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004369598s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-110910 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-110910 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-110910 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-110910 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-110910 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [534e73cb-fd51-4998-94b0-d34ccde7aa2f] Pending
helpers_test.go:352: "sp-pod" [534e73cb-fd51-4998-94b0-d34ccde7aa2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [534e73cb-fd51-4998-94b0-d34ccde7aa2f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.007349365s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-110910 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-110910 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-110910 delete -f testdata/storage-provisioner/pod.yaml: (5.001167608s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-110910 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2186040a-a44b-4e65-8435-9e317e08f770] Pending
helpers_test.go:352: "sp-pod" [2186040a-a44b-4e65-8435-9e317e08f770] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2186040a-a44b-4e65-8435-9e317e08f770] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004546117s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-110910 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.69s)

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh -n functional-110910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cp functional-110910:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1251968124/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh -n functional-110910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh -n functional-110910 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

TestFunctional/parallel/MySQL (28.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-110910 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-h6tg5" [cc911a79-bcbf-4068-b7c7-17925ffbd7cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-h6tg5" [cc911a79-bcbf-4068-b7c7-17925ffbd7cc] Running
I1126 19:46:18.044878   11003 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.006346795s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;": exit status 1 (231.59238ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1126 19:46:24.144292   11003 retry.go:31] will retry after 515.083539ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;": exit status 1 (309.35089ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1126 19:46:24.969125   11003 retry.go:31] will retry after 1.747090788s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;": exit status 1 (184.956717ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1126 19:46:26.902319   11003 retry.go:31] will retry after 1.412988555s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-110910 exec mysql-5bb876957f-h6tg5 -- mysql -ppassword -e "show databases;"
2025/11/26 19:46:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (28.72s)

TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11003/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /etc/test/nested/copy/11003/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

TestFunctional/parallel/CertSync (1.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11003.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /etc/ssl/certs/11003.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11003.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /usr/share/ca-certificates/11003.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /etc/ssl/certs/51391683.0"
I1126 19:45:57.683671   11003 detect.go:223] nested VM detected
functional_test.go:2004: Checking for existence of /etc/ssl/certs/110032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /etc/ssl/certs/110032.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/110032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /usr/share/ca-certificates/110032.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.08s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-110910 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
I1126 19:45:54.772029   11003 retry.go:31] will retry after 2.689818656s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:911dad56-5cf8-4dee-9d15-480ed6667963 ResourceVersion:766 Generation:0 CreationTimestamp:2025-11-26 19:45:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-911dad56-5cf8-4dee-9d15-480ed6667963 StorageClassName:0xc001ca6c40 VolumeMode:0xc001ca6c50 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "sudo systemctl is-active docker": exit status 1 (201.228746ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "sudo systemctl is-active containerd": exit status 1 (196.525691ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
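The exit status 1 from `minikube ssh` wraps the underlying `ssh: Process exited with status 3`: `systemctl is-active` exits 0 only when the unit is active, and here the test wants the non-zero path, since crio, not docker or containerd, is the configured runtime. A sketch of that interpretation (hypothetical helper, simplified from what the test actually asserts):

```go
package main

import "fmt"

// runtimeDisabled interprets a `systemctl is-active <unit>` result:
// exit code 0 means the unit is active, while a non-zero code (3 for
// "inactive", as in the log above) with "inactive" on stdout means the
// runtime is not running -- the expected state for docker and
// containerd on a crio-based cluster.
func runtimeDisabled(exitCode int, stdout string) bool {
	return exitCode != 0 && stdout == "inactive"
}

func main() {
	fmt.Println(runtimeDisabled(3, "inactive")) // docker/containerd in the log: true
	fmt.Println(runtimeDisabled(0, "active"))   // an active runtime would fail: false
}
```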

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.77s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110910 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-110910
localhost/kicbase/echo-server:functional-110910
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110910 image ls --format short --alsologtostderr:
I1126 19:46:08.780731   17244 out.go:360] Setting OutFile to fd 1 ...
I1126 19:46:08.780962   17244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:08.780969   17244 out.go:374] Setting ErrFile to fd 2...
I1126 19:46:08.780973   17244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:08.781158   17244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:46:08.781665   17244 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:08.781755   17244 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:08.783740   17244 ssh_runner.go:195] Run: systemctl --version
I1126 19:46:08.785995   17244 main.go:143] libmachine: domain functional-110910 has defined MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:08.786503   17244 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:d3:38", ip: ""} in network mk-functional-110910: {Iface:virbr1 ExpiryTime:2025-11-26 20:43:11 +0000 UTC Type:0 Mac:52:54:00:c2:d3:38 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-110910 Clientid:01:52:54:00:c2:d3:38}
I1126 19:46:08.786530   17244 main.go:143] libmachine: domain functional-110910 has defined IP address 192.168.39.103 and MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:08.786700   17244 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/functional-110910/id_rsa Username:docker}
I1126 19:46:08.888145   17244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110910 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-110910  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-110910  │ 3a86ecb0d098e │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ localhost/minikube-local-cache-test     │ functional-110910  │ d4eeb46bec5ac │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110910 image ls --format table --alsologtostderr:
I1126 19:46:19.747392   17377 out.go:360] Setting OutFile to fd 1 ...
I1126 19:46:19.747630   17377 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:19.747637   17377 out.go:374] Setting ErrFile to fd 2...
I1126 19:46:19.747642   17377 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:19.747836   17377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:46:19.748357   17377 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:19.748441   17377 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:19.750756   17377 ssh_runner.go:195] Run: systemctl --version
I1126 19:46:19.753370   17377 main.go:143] libmachine: domain functional-110910 has defined MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:19.753847   17377 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:d3:38", ip: ""} in network mk-functional-110910: {Iface:virbr1 ExpiryTime:2025-11-26 20:43:11 +0000 UTC Type:0 Mac:52:54:00:c2:d3:38 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-110910 Clientid:01:52:54:00:c2:d3:38}
I1126 19:46:19.753875   17377 main.go:143] libmachine: domain functional-110910 has defined IP address 192.168.39.103 and MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:19.754045   17377 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/functional-110910/id_rsa Username:docker}
I1126 19:46:19.853284   17377 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110910 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-110910"],"size":"4945146"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d6eff7568cf5e9c88ebeb4f07e39d5b102520d99e49e6839db883a11bb399478","repoDigests":["docker.io/library/80d20562ae10782aa2dba53323625facbaec90856cb6079d6e02022de0d11ccb-tmp@sha256:d58183ba824fc3b0d0123a9d175fc94114445ed76d1e47fe6ae1c7ba66b4537b"],"repoTags":[],"size":"1466016"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"3a86ecb0d098e3a395c735a58ce3ee92f845f964c7bf7e55a859cac91d3fb84c","repoDigests":["localhost/my-image@sha256:054169a42b6fd25200ffc8f0f5b20a69e7a067db95880f58da3dbedcd1b63441"],"repoTags":["localhost/my-image:functional-110910"],"size":"1468599"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d4eeb46bec5ac917778270a18eca53252379192f31236b67c0cc53bf4f8d67a1","repoDigests":["localhost/minikube-local-cache-test@sha256:c7dfc77e271c84ed46c33025962143093a231d20558f5c29c8510effbf94cf90"],"repoTags":["localhost/minikube-local-cache-test:functional-110910"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110910 image ls --format json --alsologtostderr:
I1126 19:46:19.455877   17367 out.go:360] Setting OutFile to fd 1 ...
I1126 19:46:19.456169   17367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:19.456179   17367 out.go:374] Setting ErrFile to fd 2...
I1126 19:46:19.456186   17367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:19.456398   17367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:46:19.456946   17367 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:19.457072   17367 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:19.459184   17367 ssh_runner.go:195] Run: systemctl --version
I1126 19:46:19.461290   17367 main.go:143] libmachine: domain functional-110910 has defined MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:19.461663   17367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:d3:38", ip: ""} in network mk-functional-110910: {Iface:virbr1 ExpiryTime:2025-11-26 20:43:11 +0000 UTC Type:0 Mac:52:54:00:c2:d3:38 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-110910 Clientid:01:52:54:00:c2:d3:38}
I1126 19:46:19.461693   17367 main.go:143] libmachine: domain functional-110910 has defined IP address 192.168.39.103 and MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:19.461856   17367 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/functional-110910/id_rsa Username:docker}
I1126 19:46:19.587419   17367 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
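For readers consuming these reports programmatically: the `image ls --format json` output logged above is a flat array of image records, each with an `id`, `repoDigests`, `repoTags`, and a byte count `size` encoded as a string. A minimal sketch of parsing that shape; the sample entry is trimmed from the logged output, and `human_size` is a hypothetical helper that approximates the kB/MB formatting seen in the table output:

```python
import json

# One trimmed entry in the shape produced by `image ls --format json` above.
sample = '''[
  {"id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
   "repoDigests": ["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],
   "repoTags": ["registry.k8s.io/pause:3.10.1"],
   "size": "742092"}
]'''

def human_size(num_bytes: float) -> str:
    """Format a byte count roughly the way the table output does (kB/MB)."""
    for unit in ("B", "kB", "MB", "GB"):
        if num_bytes < 1000:
            return f"{num_bytes:.3g}{unit}"
        num_bytes /= 1000
    return f"{num_bytes:.3g}TB"

images = json.loads(sample)
for img in images:
    # An image may carry no tags (e.g. the intermediate build layer above).
    tag = img["repoTags"][0] if img["repoTags"] else "<none>"
    # The table output truncates IDs to 13 hex characters.
    print(tag, img["id"][:13], human_size(int(img["size"])))
    # → registry.k8s.io/pause:3.10.1 cd073f4c5f6a8 742kB
```

The printed row matches the corresponding line of the `--format table` output logged earlier (742kB for pause:3.10.1).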

TestFunctional/parallel/ImageCommands/ImageListYaml (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110910 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-110910
size: "4945146"
- id: d4eeb46bec5ac917778270a18eca53252379192f31236b67c0cc53bf4f8d67a1
repoDigests:
- localhost/minikube-local-cache-test@sha256:c7dfc77e271c84ed46c33025962143093a231d20558f5c29c8510effbf94cf90
repoTags:
- localhost/minikube-local-cache-test:functional-110910
size: "3330"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110910 image ls --format yaml --alsologtostderr:
I1126 19:46:09.079279   17255 out.go:360] Setting OutFile to fd 1 ...
I1126 19:46:09.079540   17255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:09.079551   17255 out.go:374] Setting ErrFile to fd 2...
I1126 19:46:09.079556   17255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:09.079880   17255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:46:09.080569   17255 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:09.080667   17255 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:09.082776   17255 ssh_runner.go:195] Run: systemctl --version
I1126 19:46:09.085111   17255 main.go:143] libmachine: domain functional-110910 has defined MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:09.085616   17255 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:d3:38", ip: ""} in network mk-functional-110910: {Iface:virbr1 ExpiryTime:2025-11-26 20:43:11 +0000 UTC Type:0 Mac:52:54:00:c2:d3:38 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-110910 Clientid:01:52:54:00:c2:d3:38}
I1126 19:46:09.085644   17255 main.go:143] libmachine: domain functional-110910 has defined IP address 192.168.39.103 and MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:09.085800   17255 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/functional-110910/id_rsa Username:docker}
I1126 19:46:09.201146   17255 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.62s)

+
TestFunctional/parallel/ImageCommands/ImageBuild (9.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh pgrep buildkitd: exit status 1 (183.9781ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image build -t localhost/my-image:functional-110910 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 image build -t localhost/my-image:functional-110910 testdata/build --alsologtostderr: (9.24676326s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110910 image build -t localhost/my-image:functional-110910 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d6eff7568cf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-110910
--> 3a86ecb0d09
Successfully tagged localhost/my-image:functional-110910
3a86ecb0d098e3a395c735a58ce3ee92f845f964c7bf7e55a859cac91d3fb84c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110910 image build -t localhost/my-image:functional-110910 testdata/build --alsologtostderr:
I1126 19:46:09.886017   17276 out.go:360] Setting OutFile to fd 1 ...
I1126 19:46:09.886187   17276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:09.886198   17276 out.go:374] Setting ErrFile to fd 2...
I1126 19:46:09.886201   17276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:46:09.886378   17276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:46:09.886912   17276 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:09.887470   17276 config.go:182] Loaded profile config "functional-110910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:46:09.889560   17276 ssh_runner.go:195] Run: systemctl --version
I1126 19:46:09.892067   17276 main.go:143] libmachine: domain functional-110910 has defined MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:09.892532   17276 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c2:d3:38", ip: ""} in network mk-functional-110910: {Iface:virbr1 ExpiryTime:2025-11-26 20:43:11 +0000 UTC Type:0 Mac:52:54:00:c2:d3:38 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-110910 Clientid:01:52:54:00:c2:d3:38}
I1126 19:46:09.892567   17276 main.go:143] libmachine: domain functional-110910 has defined IP address 192.168.39.103 and MAC address 52:54:00:c2:d3:38 in network mk-functional-110910
I1126 19:46:09.892744   17276 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/functional-110910/id_rsa Username:docker}
I1126 19:46:09.989913   17276 build_images.go:162] Building image from path: /tmp/build.2014896299.tar
I1126 19:46:09.989976   17276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1126 19:46:10.013602   17276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2014896299.tar
I1126 19:46:10.023225   17276 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2014896299.tar: stat -c "%s %y" /var/lib/minikube/build/build.2014896299.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2014896299.tar': No such file or directory
I1126 19:46:10.023265   17276 ssh_runner.go:362] scp /tmp/build.2014896299.tar --> /var/lib/minikube/build/build.2014896299.tar (3072 bytes)
I1126 19:46:10.095197   17276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2014896299
I1126 19:46:10.115866   17276 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2014896299 -xf /var/lib/minikube/build/build.2014896299.tar
I1126 19:46:10.139980   17276 crio.go:315] Building image: /var/lib/minikube/build/build.2014896299
I1126 19:46:10.140064   17276 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-110910 /var/lib/minikube/build/build.2014896299 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1126 19:46:19.020122   17276 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-110910 /var/lib/minikube/build/build.2014896299 --cgroup-manager=cgroupfs: (8.880030583s)
I1126 19:46:19.020200   17276 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2014896299
I1126 19:46:19.045043   17276 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2014896299.tar
I1126 19:46:19.067615   17276 build_images.go:218] Built localhost/my-image:functional-110910 from /tmp/build.2014896299.tar
I1126 19:46:19.067666   17276 build_images.go:134] succeeded building to: functional-110910
I1126 19:46:19.067673   17276 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.76s)
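The build log above shows the staging flow minikube uses for `image build` on a remote runtime: pack the build context into a tar (`/tmp/build.*.tar`), copy it into the guest under `/var/lib/minikube/build/`, extract it, then run `sudo podman build` on the extracted directory. A sketch of just the local pack-and-extract step, with hypothetical stand-in paths rather than the real guest paths:

```python
import os
import tarfile
import tempfile

def stage_build_context(context_dir: str, work_dir: str) -> str:
    """Pack context_dir into a tar, then extract it into a fresh build dir,
    mirroring the tar -> scp -> tar -xf sequence visible in the log above."""
    tar_path = os.path.join(work_dir, "build.tar")
    with tarfile.open(tar_path, "w") as tar:
        # arcname="." roots the archive at the context directory, matching
        # extraction with `tar -C <build dir> -xf <tar>` on the other side.
        tar.add(context_dir, arcname=".")
    build_dir = os.path.join(work_dir, "build")
    os.makedirs(build_dir, exist_ok=True)
    with tarfile.open(tar_path) as tar:
        tar.extractall(build_dir)
    return build_dir  # the directory a builder (e.g. podman) would be run on

# Example: a tiny one-file context, like testdata/build's Dockerfile.
with tempfile.TemporaryDirectory() as tmp:
    ctx = os.path.join(tmp, "ctx")
    os.makedirs(ctx)
    with open(os.path.join(ctx, "Dockerfile"), "w") as f:
        f.write("FROM gcr.io/k8s-minikube/busybox\nRUN true\n")
    staged = stage_build_context(ctx, tmp)
    print(sorted(os.listdir(staged)))
```

The actual `podman build` invocation (with `--cgroup-manager=cgroupfs`, as logged) then runs against the extracted directory inside the guest.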

+
TestFunctional/parallel/ImageCommands/Setup (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-110910
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.51s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-110910 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-110910 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6lfj4" [3ac2cddd-92be-4ab0-b193-e546ee1d4493] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6lfj4" [3ac2cddd-92be-4ab0-b193-e546ee1d4493] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004584127s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image load --daemon kicbase/echo-server:functional-110910 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-110910 image load --daemon kicbase/echo-server:functional-110910 --alsologtostderr: (1.136507722s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image load --daemon kicbase/echo-server:functional-110910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-110910
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image load --daemon kicbase/echo-server:functional-110910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image save kicbase/echo-server:functional-110910 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image rm kicbase/echo-server:functional-110910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-110910
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 image save --daemon kicbase/echo-server:functional-110910 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-110910
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "249.054008ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.696017ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "236.918588ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.179763ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/MountCmd/any-port (7.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdany-port4094230444/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764186355869849140" to /tmp/TestFunctionalparallelMountCmdany-port4094230444/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764186355869849140" to /tmp/TestFunctionalparallelMountCmdany-port4094230444/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764186355869849140" to /tmp/TestFunctionalparallelMountCmdany-port4094230444/001/test-1764186355869849140
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.259993ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:45:56.030427   11003 retry.go:31] will retry after 432.152002ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 26 19:45 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 26 19:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 26 19:45 test-1764186355869849140
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh cat /mount-9p/test-1764186355869849140
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-110910 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [2baf1b3b-f1dd-465a-bb32-5b1e2f45a5e8] Pending
helpers_test.go:352: "busybox-mount" [2baf1b3b-f1dd-465a-bb32-5b1e2f45a5e8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [2baf1b3b-f1dd-465a-bb32-5b1e2f45a5e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [2baf1b3b-f1dd-465a-bb32-5b1e2f45a5e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004120693s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-110910 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdany-port4094230444/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.97s)

TestFunctional/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service list -o json
functional_test.go:1504: Took "232.450197ms" to run "out/minikube-linux-amd64 -p functional-110910 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.103:31940
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.103:31940
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdspecific-port2988511201/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.898964ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:46:04.035675   11003 retry.go:31] will retry after 597.62873ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdspecific-port2988511201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "sudo umount -f /mount-9p": exit status 1 (172.218964ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-110910 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdspecific-port2988511201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T" /mount1: exit status 1 (187.669282ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:46:05.553807   11003 retry.go:31] will retry after 495.034509ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110910 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-110910 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup333232125/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-110910
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-110910
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-110910
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (209.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1126 19:47:21.020748   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:47:48.729207   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m28.514445487s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.12s)

TestMultiControlPlane/serial/DeployApp (5.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 kubectl -- rollout status deployment/busybox: (3.558287618s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-gh6gb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-qqpk2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-swhhq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-gh6gb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-qqpk2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-swhhq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-gh6gb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-qqpk2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-swhhq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.95s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-gh6gb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-gh6gb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-qqpk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-qqpk2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-swhhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 kubectl -- exec busybox-7b57f96db7-swhhq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

TestMultiControlPlane/serial/AddWorkerNode (44.72s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node add --alsologtostderr -v 5
E1126 19:50:48.493859   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.500283   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.511660   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.533127   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.574562   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.656107   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:48.817664   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:49.139389   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:49.781456   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:51.063508   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:50:53.625231   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 node add --alsologtostderr -v 5: (43.994878735s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-959602 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp testdata/cp-test.txt ha-959602:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3047555470/001/cp-test_ha-959602.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602:/home/docker/cp-test.txt ha-959602-m02:/home/docker/cp-test_ha-959602_ha-959602-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test_ha-959602_ha-959602-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602:/home/docker/cp-test.txt ha-959602-m03:/home/docker/cp-test_ha-959602_ha-959602-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test_ha-959602_ha-959602-m03.txt"
E1126 19:50:58.747020   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602:/home/docker/cp-test.txt ha-959602-m04:/home/docker/cp-test_ha-959602_ha-959602-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test_ha-959602_ha-959602-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp testdata/cp-test.txt ha-959602-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3047555470/001/cp-test_ha-959602-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m02:/home/docker/cp-test.txt ha-959602:/home/docker/cp-test_ha-959602-m02_ha-959602.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test_ha-959602-m02_ha-959602.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m02:/home/docker/cp-test.txt ha-959602-m03:/home/docker/cp-test_ha-959602-m02_ha-959602-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test_ha-959602-m02_ha-959602-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m02:/home/docker/cp-test.txt ha-959602-m04:/home/docker/cp-test_ha-959602-m02_ha-959602-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test_ha-959602-m02_ha-959602-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp testdata/cp-test.txt ha-959602-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3047555470/001/cp-test_ha-959602-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m03:/home/docker/cp-test.txt ha-959602:/home/docker/cp-test_ha-959602-m03_ha-959602.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test_ha-959602-m03_ha-959602.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m03:/home/docker/cp-test.txt ha-959602-m02:/home/docker/cp-test_ha-959602-m03_ha-959602-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test_ha-959602-m03_ha-959602-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m03:/home/docker/cp-test.txt ha-959602-m04:/home/docker/cp-test_ha-959602-m03_ha-959602-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test_ha-959602-m03_ha-959602-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp testdata/cp-test.txt ha-959602-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3047555470/001/cp-test_ha-959602-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m04:/home/docker/cp-test.txt ha-959602:/home/docker/cp-test_ha-959602-m04_ha-959602.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602 "sudo cat /home/docker/cp-test_ha-959602-m04_ha-959602.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m04:/home/docker/cp-test.txt ha-959602-m02:/home/docker/cp-test_ha-959602-m04_ha-959602-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m02 "sudo cat /home/docker/cp-test_ha-959602-m04_ha-959602-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 cp ha-959602-m04:/home/docker/cp-test.txt ha-959602-m03:/home/docker/cp-test_ha-959602-m04_ha-959602-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 ssh -n ha-959602-m03 "sudo cat /home/docker/cp-test_ha-959602-m04_ha-959602-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.83s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (75.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node stop m02 --alsologtostderr -v 5
E1126 19:51:08.988361   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:51:29.470634   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:52:10.432407   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:52:21.022350   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 node stop m02 --alsologtostderr -v 5: (1m14.688538992s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5: exit status 7 (508.567187ms)

                                                
                                                
-- stdout --
	ha-959602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-959602-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-959602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:52:21.706250   20430 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:52:21.706492   20430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:52:21.706505   20430 out.go:374] Setting ErrFile to fd 2...
	I1126 19:52:21.706512   20430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:52:21.706771   20430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 19:52:21.706975   20430 out.go:368] Setting JSON to false
	I1126 19:52:21.707008   20430 mustload.go:66] Loading cluster: ha-959602
	I1126 19:52:21.707103   20430 notify.go:221] Checking for updates...
	I1126 19:52:21.707553   20430 config.go:182] Loaded profile config "ha-959602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:52:21.707574   20430 status.go:174] checking status of ha-959602 ...
	I1126 19:52:21.709558   20430 status.go:371] ha-959602 host status = "Running" (err=<nil>)
	I1126 19:52:21.709582   20430 host.go:66] Checking if "ha-959602" exists ...
	I1126 19:52:21.712161   20430 main.go:143] libmachine: domain ha-959602 has defined MAC address 52:54:00:e7:93:19 in network mk-ha-959602
	I1126 19:52:21.712675   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:93:19", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:46:50 +0000 UTC Type:0 Mac:52:54:00:e7:93:19 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-959602 Clientid:01:52:54:00:e7:93:19}
	I1126 19:52:21.712699   20430 main.go:143] libmachine: domain ha-959602 has defined IP address 192.168.39.245 and MAC address 52:54:00:e7:93:19 in network mk-ha-959602
	I1126 19:52:21.712822   20430 host.go:66] Checking if "ha-959602" exists ...
	I1126 19:52:21.712990   20430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:52:21.715471   20430 main.go:143] libmachine: domain ha-959602 has defined MAC address 52:54:00:e7:93:19 in network mk-ha-959602
	I1126 19:52:21.715935   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:93:19", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:46:50 +0000 UTC Type:0 Mac:52:54:00:e7:93:19 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-959602 Clientid:01:52:54:00:e7:93:19}
	I1126 19:52:21.715966   20430 main.go:143] libmachine: domain ha-959602 has defined IP address 192.168.39.245 and MAC address 52:54:00:e7:93:19 in network mk-ha-959602
	I1126 19:52:21.716141   20430 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/ha-959602/id_rsa Username:docker}
	I1126 19:52:21.807059   20430 ssh_runner.go:195] Run: systemctl --version
	I1126 19:52:21.815146   20430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:52:21.834518   20430 kubeconfig.go:125] found "ha-959602" server: "https://192.168.39.254:8443"
	I1126 19:52:21.834550   20430 api_server.go:166] Checking apiserver status ...
	I1126 19:52:21.834589   20430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:52:21.857770   20430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup
	W1126 19:52:21.870124   20430 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:52:21.870192   20430 ssh_runner.go:195] Run: ls
	I1126 19:52:21.876240   20430 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1126 19:52:21.882129   20430 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1126 19:52:21.882156   20430 status.go:463] ha-959602 apiserver status = Running (err=<nil>)
	I1126 19:52:21.882167   20430 status.go:176] ha-959602 status: &{Name:ha-959602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:52:21.882186   20430 status.go:174] checking status of ha-959602-m02 ...
	I1126 19:52:21.883851   20430 status.go:371] ha-959602-m02 host status = "Stopped" (err=<nil>)
	I1126 19:52:21.883873   20430 status.go:384] host is not running, skipping remaining checks
	I1126 19:52:21.883880   20430 status.go:176] ha-959602-m02 status: &{Name:ha-959602-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:52:21.883899   20430 status.go:174] checking status of ha-959602-m03 ...
	I1126 19:52:21.885310   20430 status.go:371] ha-959602-m03 host status = "Running" (err=<nil>)
	I1126 19:52:21.885326   20430 host.go:66] Checking if "ha-959602-m03" exists ...
	I1126 19:52:21.887556   20430 main.go:143] libmachine: domain ha-959602-m03 has defined MAC address 52:54:00:23:fe:8c in network mk-ha-959602
	I1126 19:52:21.887934   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:fe:8c", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:48:51 +0000 UTC Type:0 Mac:52:54:00:23:fe:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-959602-m03 Clientid:01:52:54:00:23:fe:8c}
	I1126 19:52:21.887965   20430 main.go:143] libmachine: domain ha-959602-m03 has defined IP address 192.168.39.246 and MAC address 52:54:00:23:fe:8c in network mk-ha-959602
	I1126 19:52:21.888111   20430 host.go:66] Checking if "ha-959602-m03" exists ...
	I1126 19:52:21.888306   20430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:52:21.890235   20430 main.go:143] libmachine: domain ha-959602-m03 has defined MAC address 52:54:00:23:fe:8c in network mk-ha-959602
	I1126 19:52:21.890571   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:fe:8c", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:48:51 +0000 UTC Type:0 Mac:52:54:00:23:fe:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-959602-m03 Clientid:01:52:54:00:23:fe:8c}
	I1126 19:52:21.890604   20430 main.go:143] libmachine: domain ha-959602-m03 has defined IP address 192.168.39.246 and MAC address 52:54:00:23:fe:8c in network mk-ha-959602
	I1126 19:52:21.890739   20430 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/ha-959602-m03/id_rsa Username:docker}
	I1126 19:52:21.977448   20430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:52:21.997948   20430 kubeconfig.go:125] found "ha-959602" server: "https://192.168.39.254:8443"
	I1126 19:52:21.998005   20430 api_server.go:166] Checking apiserver status ...
	I1126 19:52:21.998047   20430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:52:22.020899   20430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1762/cgroup
	W1126 19:52:22.034677   20430 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:52:22.034761   20430 ssh_runner.go:195] Run: ls
	I1126 19:52:22.041328   20430 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1126 19:52:22.046563   20430 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1126 19:52:22.046593   20430 status.go:463] ha-959602-m03 apiserver status = Running (err=<nil>)
	I1126 19:52:22.046604   20430 status.go:176] ha-959602-m03 status: &{Name:ha-959602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 19:52:22.046622   20430 status.go:174] checking status of ha-959602-m04 ...
	I1126 19:52:22.048365   20430 status.go:371] ha-959602-m04 host status = "Running" (err=<nil>)
	I1126 19:52:22.048388   20430 host.go:66] Checking if "ha-959602-m04" exists ...
	I1126 19:52:22.051072   20430 main.go:143] libmachine: domain ha-959602-m04 has defined MAC address 52:54:00:b2:2b:4b in network mk-ha-959602
	I1126 19:52:22.051516   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:2b:4b", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:50:27 +0000 UTC Type:0 Mac:52:54:00:b2:2b:4b Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-959602-m04 Clientid:01:52:54:00:b2:2b:4b}
	I1126 19:52:22.051539   20430 main.go:143] libmachine: domain ha-959602-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:b2:2b:4b in network mk-ha-959602
	I1126 19:52:22.051696   20430 host.go:66] Checking if "ha-959602-m04" exists ...
	I1126 19:52:22.051957   20430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:52:22.054318   20430 main.go:143] libmachine: domain ha-959602-m04 has defined MAC address 52:54:00:b2:2b:4b in network mk-ha-959602
	I1126 19:52:22.054758   20430 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:2b:4b", ip: ""} in network mk-ha-959602: {Iface:virbr1 ExpiryTime:2025-11-26 20:50:27 +0000 UTC Type:0 Mac:52:54:00:b2:2b:4b Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-959602-m04 Clientid:01:52:54:00:b2:2b:4b}
	I1126 19:52:22.054793   20430 main.go:143] libmachine: domain ha-959602-m04 has defined IP address 192.168.39.206 and MAC address 52:54:00:b2:2b:4b in network mk-ha-959602
	I1126 19:52:22.054945   20430 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/ha-959602-m04/id_rsa Username:docker}
	I1126 19:52:22.138204   20430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:52:22.157513   20430 status.go:176] ha-959602-m04 status: &{Name:ha-959602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (75.20s)
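
minikube documents the exit status of `minikube status` as a bitfield over component health (host, cluster, and Kubernetes flags OR'd together), which is consistent with the `exit status 7` seen above once m02 is fully stopped. As a minimal sketch, the per-node stdout format above can be parsed like so; the sample text is copied from the log, not produced by running minikube here, and `stopped_nodes` is a hypothetical helper:

```python
# Sketch: parse a saved `minikube status` dump (format as in the log above)
# and report which nodes have a stopped host. Illustrative only.

SAMPLE = """\
ha-959602
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-959602-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
"""

def stopped_nodes(status_text: str) -> list[str]:
    """Return names of nodes whose 'host:' line reads Stopped."""
    stopped, current = [], None
    for line in status_text.splitlines():
        line = line.strip()
        if line and ":" not in line:   # a bare line starts a new node block
            current = line
        elif line.startswith("host:") and line.endswith("Stopped"):
            stopped.append(current)
    return stopped

print(stopped_nodes(SAMPLE))  # -> ['ha-959602-m02']
```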

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 node start m02 --alsologtostderr -v 5: (40.098794582s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 stop --alsologtostderr -v 5
E1126 19:53:32.354104   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:55:48.497710   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:56:16.197359   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:57:21.022244   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 stop --alsologtostderr -v 5: (4m22.378255721s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 start --wait true --alsologtostderr -v 5
E1126 19:58:44.091517   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 start --wait true --alsologtostderr -v 5: (2m2.990042024s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.53s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 node delete m03 --alsologtostderr -v 5: (18.237604964s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.88s)
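
The go-template passed to `kubectl get nodes` above iterates `.items`, then each node's `.status.conditions`, and prints the status of every condition whose type is "Ready". The same traversal can be sketched in Python over a hand-written stand-in for `kubectl get nodes -o json` (the node list below is hypothetical sample data, not output from this cluster):

```python
# Sketch of what the go-template extracts: the "Ready" condition status
# for each node. `nodes` mimics the shape of `kubectl get nodes -o json`.

nodes = {
    "items": [
        {"metadata": {"name": "ha-959602"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"},
         ]}},
        {"metadata": {"name": "ha-959602-m02"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"},
         ]}},
    ]
}

ready = [cond["status"]
         for item in nodes["items"]
         for cond in item["status"]["conditions"]
         if cond["type"] == "Ready"]
print(ready)  # -> ['True', 'True']
```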

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (257.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 stop --alsologtostderr -v 5
E1126 20:00:48.495020   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:02:21.020568   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 stop --alsologtostderr -v 5: (4m17.095837959s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5: exit status 7 (63.801666ms)

                                                
                                                
-- stdout --
	ha-959602
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959602-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-959602-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:04:06.859212   23763 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:04:06.859477   23763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:04:06.859487   23763 out.go:374] Setting ErrFile to fd 2...
	I1126 20:04:06.859491   23763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:04:06.859704   23763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:04:06.859866   23763 out.go:368] Setting JSON to false
	I1126 20:04:06.859889   23763 mustload.go:66] Loading cluster: ha-959602
	I1126 20:04:06.860011   23763 notify.go:221] Checking for updates...
	I1126 20:04:06.860818   23763 config.go:182] Loaded profile config "ha-959602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:04:06.860848   23763 status.go:174] checking status of ha-959602 ...
	I1126 20:04:06.863650   23763 status.go:371] ha-959602 host status = "Stopped" (err=<nil>)
	I1126 20:04:06.863667   23763 status.go:384] host is not running, skipping remaining checks
	I1126 20:04:06.863672   23763 status.go:176] ha-959602 status: &{Name:ha-959602 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:04:06.863688   23763 status.go:174] checking status of ha-959602-m02 ...
	I1126 20:04:06.864847   23763 status.go:371] ha-959602-m02 host status = "Stopped" (err=<nil>)
	I1126 20:04:06.864861   23763 status.go:384] host is not running, skipping remaining checks
	I1126 20:04:06.864866   23763 status.go:176] ha-959602-m02 status: &{Name:ha-959602-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:04:06.864877   23763 status.go:174] checking status of ha-959602-m04 ...
	I1126 20:04:06.866209   23763 status.go:371] ha-959602-m04 host status = "Stopped" (err=<nil>)
	I1126 20:04:06.866224   23763 status.go:384] host is not running, skipping remaining checks
	I1126 20:04:06.866230   23763 status.go:176] ha-959602-m04 status: &{Name:ha-959602-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (257.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (96.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m35.406590137s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.05s)
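The final check above renders each node's `Ready` condition through a kubectl go-template. A minimal standalone sketch of the same template logic over a mocked node list (the `condition`/`node`/`nodeList` structs are illustrative stand-ins, not the real Kubernetes types; kubectl matches JSON keys, so the real invocation uses lowercase `.items`, `.status.conditions`, etc.):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Minimal stand-ins for the node fields the template touches.
type condition struct{ Type, Status string }
type node struct {
	Status struct{ Conditions []condition }
}
type nodeList struct{ Items []node }

// Same template body the test passes via -o go-template, with
// exported field names to match the mock structs above.
const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

// renderReady renders one line per node with its Ready condition status.
func renderReady(statuses []string) string {
	var list nodeList
	for _, s := range statuses {
		var n node
		n.Status.Conditions = []condition{
			{Type: "MemoryPressure", Status: "False"},
			{Type: "Ready", Status: s},
		}
		list.Items = append(list.Items, n)
	}
	var buf bytes.Buffer
	tmpl := template.Must(template.New("ready").Parse(readyTmpl))
	if err := tmpl.Execute(&buf, &list); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Three healthy nodes, as a restarted HA cluster should report.
	fmt.Print(renderReady([]string{"True", "True", "True"}))
}
```

Each node contributes one ` True` line when healthy, which is what the test asserts against after the restart.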

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (83.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 node add --control-plane --alsologtostderr -v 5
E1126 20:05:48.493523   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-959602 node add --control-plane --alsologtostderr -v 5: (1m22.790874597s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-959602 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                    
TestJSONOutput/start/Command (88.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-781649 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1126 20:07:11.559314   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:07:21.021865   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-781649 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.911881682s)
--- PASS: TestJSONOutput/start/Command (88.91s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-781649 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-781649 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.52s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-781649 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-781649 --output=json --user=testUser: (7.5157623s)
--- PASS: TestJSONOutput/stop/Command (7.52s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-886430 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-886430 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.128134ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6e20bb98-46ac-44d9-a5f9-0cb2afa3735f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-886430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0644dd7e-76e7-4a29-a5c3-9a22d2ce2506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
	{"specversion":"1.0","id":"8a16fd58-30cd-4ca5-8d93-441c726b8503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6843c9d8-9199-4a63-b4c6-36030e5faaa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig"}}
	{"specversion":"1.0","id":"b5eb0ada-e6de-4cc5-8780-8e551dd35d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube"}}
	{"specversion":"1.0","id":"559abdaf-af9f-47d4-af5a-d7124074fa25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dc145422-ff1d-4cd8-8c44-630ea0698e94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9322c6da-a932-4a39-9fdb-43a6d6fc94f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-886430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-886430
--- PASS: TestErrorJSONOutput (0.23s)
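Each stdout line in the run above is a CloudEvents-style JSON envelope. A minimal sketch of decoding one such event, with an `event` struct mirroring only the envelope fields visible in the log (the full envelope carries more fields, e.g. `id` and `source`, which are omitted here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors a subset of the envelope fields visible in the log.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes a single JSON-per-line event.
func parseEvent(line string) (event, error) {
	var e event
	err := json.Unmarshal([]byte(line), &e)
	return e, err
}

func main() {
	// An error event trimmed from the run above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	e, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["exitcode"])
}
```

The test asserts on exactly this kind of decoded structure: an `io.k8s.sigs.minikube.error` event whose `data.exitcode` matches the process exit status (56).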

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (81.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-590376 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-590376 --driver=kvm2  --container-runtime=crio: (38.98769928s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-592800 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-592800 --driver=kvm2  --container-runtime=crio: (39.408735591s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-590376
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-592800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-592800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-592800
helpers_test.go:175: Cleaning up "first-590376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-590376
--- PASS: TestMinikubeProfile (81.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-976318 --memory=3072 --mount-string /tmp/TestMountStartserial2981293476/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-976318 --memory=3072 --mount-string /tmp/TestMountStartserial2981293476/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.358521391s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.36s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-976318 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-976318 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (22.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-988986 --memory=3072 --mount-string /tmp/TestMountStartserial2981293476/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1126 20:10:48.500054   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-988986 --memory=3072 --mount-string /tmp/TestMountStartserial2981293476/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.099893392s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-976318 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-988986
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-988986: (1.332537933s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-988986
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-988986: (19.740497569s)
--- PASS: TestMountStart/serial/RestartStopped (20.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988986 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (127.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-230981 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1126 20:12:21.020878   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-230981 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m6.734193639s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.07s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-230981 -- rollout status deployment/busybox: (3.834240078s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-8clxc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-dm2xb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-8clxc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-dm2xb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-8clxc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-dm2xb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.41s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-8clxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-8clxc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-dm2xb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-230981 -- exec busybox-7b57f96db7-dm2xb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
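The ping test above extracts the host IP from busybox's nslookup output with `awk 'NR==5' | cut -d' ' -f3` (line 5, third space-separated field). A sketch of the same extraction in Go over sample output (the sample text below is illustrative of busybox nslookup's shape, not captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
// take line 5 of the output, then the third space-delimited field.
// Like cut, consecutive spaces produce empty fields rather than
// being collapsed, so strings.Split matches cut's behavior.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Shape of busybox nslookup output; addresses are illustrative.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal`
	fmt.Println(hostIP(sample))
}
```

The extracted address is then the target of the `ping -c 1` check that follows in the log.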

                                                
                                    
TestMultiNode/serial/AddNode (45.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-230981 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-230981 -v=5 --alsologtostderr: (44.586174535s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-230981 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp testdata/cp-test.txt multinode-230981:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4159045823/001/cp-test_multinode-230981.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981:/home/docker/cp-test.txt multinode-230981-m02:/home/docker/cp-test_multinode-230981_multinode-230981-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test_multinode-230981_multinode-230981-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981:/home/docker/cp-test.txt multinode-230981-m03:/home/docker/cp-test_multinode-230981_multinode-230981-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test_multinode-230981_multinode-230981-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp testdata/cp-test.txt multinode-230981-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4159045823/001/cp-test_multinode-230981-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m02:/home/docker/cp-test.txt multinode-230981:/home/docker/cp-test_multinode-230981-m02_multinode-230981.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test_multinode-230981-m02_multinode-230981.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m02:/home/docker/cp-test.txt multinode-230981-m03:/home/docker/cp-test_multinode-230981-m02_multinode-230981-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test_multinode-230981-m02_multinode-230981-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp testdata/cp-test.txt multinode-230981-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4159045823/001/cp-test_multinode-230981-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m03:/home/docker/cp-test.txt multinode-230981:/home/docker/cp-test_multinode-230981-m03_multinode-230981.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981 "sudo cat /home/docker/cp-test_multinode-230981-m03_multinode-230981.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 cp multinode-230981-m03:/home/docker/cp-test.txt multinode-230981-m02:/home/docker/cp-test_multinode-230981-m03_multinode-230981-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 ssh -n multinode-230981-m02 "sudo cat /home/docker/cp-test_multinode-230981-m03_multinode-230981-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.95s)
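The cp/ssh pairs above all follow one round-trip pattern: copy a known file in, read it back over ssh, and compare. A minimal local sketch of that pattern, with plain `cp`/`cat` standing in for `minikube cp` and `minikube ssh "sudo cat"` (paths and contents are illustrative, not from the test):

```shell
#!/bin/sh
# Round-trip copy check, as in TestMultiNode/serial/CopyFile:
# write a known file, copy it to a destination, read it back,
# and verify the contents survived unchanged.
set -eu
src="$(mktemp)"
dst="$(mktemp)"
printf 'hello from cp-test\n' > "$src"
cp "$src" "$dst"            # stands in for: minikube -p <profile> cp <src> <node>:<path>
readback="$(cat "$dst")"    # stands in for: minikube -p <profile> ssh -n <node> "sudo cat <path>"
if [ "$readback" = 'hello from cp-test' ]; then
  echo 'round-trip OK'
else
  echo 'round-trip FAILED' >&2
  exit 1
fi
rm -f "$src" "$dst"
```

The test repeats this round trip for every (source node, destination node) pair, which is why the log shows the same `cp`/`ssh -n … "sudo cat …"` sequence nine times.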

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-230981 node stop m03: (1.8393137s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-230981 status: exit status 7 (330.891066ms)
-- stdout --
	multinode-230981
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-230981-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-230981-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr: exit status 7 (337.093692ms)
-- stdout --
	multinode-230981
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-230981-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-230981-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1126 20:14:26.755822   29463 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:14:26.755927   29463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:26.755935   29463 out.go:374] Setting ErrFile to fd 2...
	I1126 20:14:26.755940   29463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:14:26.756175   29463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:14:26.756319   29463 out.go:368] Setting JSON to false
	I1126 20:14:26.756341   29463 mustload.go:66] Loading cluster: multinode-230981
	I1126 20:14:26.756404   29463 notify.go:221] Checking for updates...
	I1126 20:14:26.756711   29463 config.go:182] Loaded profile config "multinode-230981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:14:26.756724   29463 status.go:174] checking status of multinode-230981 ...
	I1126 20:14:26.759670   29463 status.go:371] multinode-230981 host status = "Running" (err=<nil>)
	I1126 20:14:26.759690   29463 host.go:66] Checking if "multinode-230981" exists ...
	I1126 20:14:26.762450   29463 main.go:143] libmachine: domain multinode-230981 has defined MAC address 52:54:00:d1:cc:00 in network mk-multinode-230981
	I1126 20:14:26.762888   29463 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:cc:00", ip: ""} in network mk-multinode-230981: {Iface:virbr1 ExpiryTime:2025-11-26 21:11:35 +0000 UTC Type:0 Mac:52:54:00:d1:cc:00 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-230981 Clientid:01:52:54:00:d1:cc:00}
	I1126 20:14:26.762921   29463 main.go:143] libmachine: domain multinode-230981 has defined IP address 192.168.39.61 and MAC address 52:54:00:d1:cc:00 in network mk-multinode-230981
	I1126 20:14:26.763107   29463 host.go:66] Checking if "multinode-230981" exists ...
	I1126 20:14:26.763353   29463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:14:26.765713   29463 main.go:143] libmachine: domain multinode-230981 has defined MAC address 52:54:00:d1:cc:00 in network mk-multinode-230981
	I1126 20:14:26.766182   29463 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:cc:00", ip: ""} in network mk-multinode-230981: {Iface:virbr1 ExpiryTime:2025-11-26 21:11:35 +0000 UTC Type:0 Mac:52:54:00:d1:cc:00 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:multinode-230981 Clientid:01:52:54:00:d1:cc:00}
	I1126 20:14:26.766214   29463 main.go:143] libmachine: domain multinode-230981 has defined IP address 192.168.39.61 and MAC address 52:54:00:d1:cc:00 in network mk-multinode-230981
	I1126 20:14:26.766350   29463 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/multinode-230981/id_rsa Username:docker}
	I1126 20:14:26.850303   29463 ssh_runner.go:195] Run: systemctl --version
	I1126 20:14:26.858357   29463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:14:26.879563   29463 kubeconfig.go:125] found "multinode-230981" server: "https://192.168.39.61:8443"
	I1126 20:14:26.879602   29463 api_server.go:166] Checking apiserver status ...
	I1126 20:14:26.879641   29463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:26.902809   29463 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup
	W1126 20:14:26.917141   29463 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:14:26.917191   29463 ssh_runner.go:195] Run: ls
	I1126 20:14:26.922435   29463 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I1126 20:14:26.927050   29463 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I1126 20:14:26.927074   29463 status.go:463] multinode-230981 apiserver status = Running (err=<nil>)
	I1126 20:14:26.927095   29463 status.go:176] multinode-230981 status: &{Name:multinode-230981 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:14:26.927120   29463 status.go:174] checking status of multinode-230981-m02 ...
	I1126 20:14:26.928664   29463 status.go:371] multinode-230981-m02 host status = "Running" (err=<nil>)
	I1126 20:14:26.928680   29463 host.go:66] Checking if "multinode-230981-m02" exists ...
	I1126 20:14:26.931207   29463 main.go:143] libmachine: domain multinode-230981-m02 has defined MAC address 52:54:00:81:3e:e7 in network mk-multinode-230981
	I1126 20:14:26.931606   29463 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:81:3e:e7", ip: ""} in network mk-multinode-230981: {Iface:virbr1 ExpiryTime:2025-11-26 21:13:00 +0000 UTC Type:0 Mac:52:54:00:81:3e:e7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-230981-m02 Clientid:01:52:54:00:81:3e:e7}
	I1126 20:14:26.931642   29463 main.go:143] libmachine: domain multinode-230981-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:81:3e:e7 in network mk-multinode-230981
	I1126 20:14:26.931756   29463 host.go:66] Checking if "multinode-230981-m02" exists ...
	I1126 20:14:26.931943   29463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:14:26.933959   29463 main.go:143] libmachine: domain multinode-230981-m02 has defined MAC address 52:54:00:81:3e:e7 in network mk-multinode-230981
	I1126 20:14:26.934283   29463 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:81:3e:e7", ip: ""} in network mk-multinode-230981: {Iface:virbr1 ExpiryTime:2025-11-26 21:13:00 +0000 UTC Type:0 Mac:52:54:00:81:3e:e7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-230981-m02 Clientid:01:52:54:00:81:3e:e7}
	I1126 20:14:26.934312   29463 main.go:143] libmachine: domain multinode-230981-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:81:3e:e7 in network mk-multinode-230981
	I1126 20:14:26.934439   29463 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/multinode-230981-m02/id_rsa Username:docker}
	I1126 20:14:27.016559   29463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:14:27.032711   29463 status.go:176] multinode-230981-m02 status: &{Name:multinode-230981-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:14:27.032739   29463 status.go:174] checking status of multinode-230981-m03 ...
	I1126 20:14:27.034378   29463 status.go:371] multinode-230981-m03 host status = "Stopped" (err=<nil>)
	I1126 20:14:27.034397   29463 status.go:384] host is not running, skipping remaining checks
	I1126 20:14:27.034405   29463 status.go:176] multinode-230981-m03 status: &{Name:multinode-230981-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)

TestMultiNode/serial/StartAfterStop (38.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-230981 node start m03 -v=5 --alsologtostderr: (37.612606742s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.13s)

TestMultiNode/serial/RestartKeepsNodes (297.62s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-230981
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-230981
E1126 20:15:24.095698   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:15:48.499781   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:17:21.022361   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-230981: (2m39.768549847s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-230981 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-230981 --wait=true -v=5 --alsologtostderr: (2m17.728727235s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-230981
--- PASS: TestMultiNode/serial/RestartKeepsNodes (297.62s)

TestMultiNode/serial/DeleteNode (2.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-230981 node delete m03: (2.116310307s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.58s)
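The `kubectl get nodes -o go-template` call above prints one Ready-condition status per node, and the test asserts that every line is `True`. A sketch of that assertion over mock output (the real input comes from `kubectl`, which is not invoked here):

```shell
#!/bin/sh
# Check that every node reports Ready=True, as the go-template output
# above is expected to show. mock_statuses stands in for the real
# `kubectl get nodes -o go-template=...` output, one status per line.
set -eu
mock_statuses='True
True'
if printf '%s\n' "$mock_statuses" | grep -qv '^True$'; then
  result='at least one node not Ready'
else
  result='all nodes Ready'
fi
echo "$result"
```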

TestMultiNode/serial/StopMultiNode (175.49s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 stop
E1126 20:20:48.493309   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:22:21.021715   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-230981 stop: (2m55.363210825s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-230981 status: exit status 7 (61.531992ms)
-- stdout --
	multinode-230981
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-230981-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr: exit status 7 (61.872553ms)
-- stdout --
	multinode-230981
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-230981-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1126 20:23:00.845004   32247 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:23:00.845251   32247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:00.845260   32247 out.go:374] Setting ErrFile to fd 2...
	I1126 20:23:00.845264   32247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:23:00.845436   32247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:23:00.845589   32247 out.go:368] Setting JSON to false
	I1126 20:23:00.845613   32247 mustload.go:66] Loading cluster: multinode-230981
	I1126 20:23:00.845741   32247 notify.go:221] Checking for updates...
	I1126 20:23:00.845913   32247 config.go:182] Loaded profile config "multinode-230981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:23:00.845925   32247 status.go:174] checking status of multinode-230981 ...
	I1126 20:23:00.847720   32247 status.go:371] multinode-230981 host status = "Stopped" (err=<nil>)
	I1126 20:23:00.847735   32247 status.go:384] host is not running, skipping remaining checks
	I1126 20:23:00.847740   32247 status.go:176] multinode-230981 status: &{Name:multinode-230981 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:23:00.847761   32247 status.go:174] checking status of multinode-230981-m02 ...
	I1126 20:23:00.849066   32247 status.go:371] multinode-230981-m02 host status = "Stopped" (err=<nil>)
	I1126 20:23:00.849092   32247 status.go:384] host is not running, skipping remaining checks
	I1126 20:23:00.849099   32247 status.go:176] multinode-230981-m02 status: &{Name:multinode-230981-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (175.49s)

TestMultiNode/serial/RestartMultiNode (98.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-230981 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1126 20:23:51.561658   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-230981 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.521318615s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-230981 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.01s)

TestMultiNode/serial/ValidateNameConflict (41.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-230981
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-230981-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-230981-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.108228ms)
-- stdout --
	* [multinode-230981-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-230981-m02' is duplicated with machine name 'multinode-230981-m02' in profile 'multinode-230981'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-230981-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-230981-m03 --driver=kvm2  --container-runtime=crio: (39.835904769s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-230981
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-230981: exit status 80 (205.071168ms)
-- stdout --
	* Adding node m03 to cluster multinode-230981 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-230981-m03 already exists in multinode-230981-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-230981-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.04s)
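The MK_USAGE failure above is the expected outcome: the new profile name collides with a machine name inside the existing `multinode-230981` profile. A sketch of that uniqueness check (machine names copied from this run; the loop itself is an illustration, not minikube's actual validation code):

```shell
#!/bin/sh
# Profile-name uniqueness check, mirroring the MK_USAGE failure above:
# a new profile may not reuse a machine name from an existing profile.
set -eu
existing_machines='multinode-230981 multinode-230981-m02 multinode-230981-m03'
new_profile='multinode-230981-m02'
verdict="profile name '$new_profile' is free"
for m in $existing_machines; do
  if [ "$m" = "$new_profile" ]; then
    verdict='X Exiting due to MK_USAGE: Profile name should be unique'
    break
  fi
done
echo "$verdict"
```

This is why the test then retries with `-m03` as a fresh profile name and only fails again later, for the separate GUEST_NODE_ADD conflict, when adding a node back to the original cluster.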

TestScheduledStopUnix (111s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-714939 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-714939 --memory=3072 --driver=kvm2  --container-runtime=crio: (39.36583304s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-714939 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1126 20:28:31.270155   34608 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:28:31.270388   34608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:31.270396   34608 out.go:374] Setting ErrFile to fd 2...
	I1126 20:28:31.270399   34608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:31.270561   34608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:28:31.270798   34608 out.go:368] Setting JSON to false
	I1126 20:28:31.270876   34608 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:31.271250   34608 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:28:31.271314   34608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/config.json ...
	I1126 20:28:31.271510   34608 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:31.271613   34608 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-714939 -n scheduled-stop-714939
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-714939 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1126 20:28:31.554875   34653 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:28:31.554976   34653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:31.554987   34653 out.go:374] Setting ErrFile to fd 2...
	I1126 20:28:31.554993   34653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:31.555257   34653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:28:31.555518   34653 out.go:368] Setting JSON to false
	I1126 20:28:31.555739   34653 daemonize_unix.go:73] killing process 34643 as it is an old scheduled stop
	I1126 20:28:31.555846   34653 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:31.556344   34653 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:28:31.556443   34653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/config.json ...
	I1126 20:28:31.556677   34653 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:31.556835   34653 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1126 20:28:31.561246   11003 retry.go:31] will retry after 145.463µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.562380   11003 retry.go:31] will retry after 127.509µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.563534   11003 retry.go:31] will retry after 301.262µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.564685   11003 retry.go:31] will retry after 311.251µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.565845   11003 retry.go:31] will retry after 472.461µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.566999   11003 retry.go:31] will retry after 866.436µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.568150   11003 retry.go:31] will retry after 789.795µs: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.569286   11003 retry.go:31] will retry after 1.590449ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.571480   11003 retry.go:31] will retry after 3.578149ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.575701   11003 retry.go:31] will retry after 2.948279ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.578937   11003 retry.go:31] will retry after 8.341474ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.588241   11003 retry.go:31] will retry after 7.637767ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.596457   11003 retry.go:31] will retry after 16.897624ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.613692   11003 retry.go:31] will retry after 20.372015ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.635017   11003 retry.go:31] will retry after 21.383263ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
I1126 20:28:31.657241   11003 retry.go:31] will retry after 44.506741ms: open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-714939 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-714939 -n scheduled-stop-714939
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-714939
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-714939 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1126 20:28:57.283059   34802 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:28:57.283302   34802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:57.283310   34802 out.go:374] Setting ErrFile to fd 2...
	I1126 20:28:57.283314   34802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:57.283505   34802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:28:57.283727   34802 out.go:368] Setting JSON to false
	I1126 20:28:57.283800   34802 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:57.284114   34802 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:28:57.284175   34802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/scheduled-stop-714939/config.json ...
	I1126 20:28:57.284354   34802 mustload.go:66] Loading cluster: scheduled-stop-714939
	I1126 20:28:57.284440   34802 config.go:182] Loaded profile config "scheduled-stop-714939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-714939
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-714939: exit status 7 (58.366708ms)

-- stdout --
	scheduled-stop-714939
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-714939 -n scheduled-stop-714939
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-714939 -n scheduled-stop-714939: exit status 7 (59.620561ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-714939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-714939
--- PASS: TestScheduledStopUnix (111.00s)

TestRunningBinaryUpgrade (382.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1719380722 start -p running-upgrade-733922 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1719380722 start -p running-upgrade-733922 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m1.287156662s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-733922 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1126 20:32:04.097275   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:32:21.022447   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-733922 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m19.612039156s)
helpers_test.go:175: Cleaning up "running-upgrade-733922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-733922
--- PASS: TestRunningBinaryUpgrade (382.49s)

TestKubernetesUpgrade (210.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.085805171s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-995998
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-995998: (2.267142732s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-995998 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-995998 status --format={{.Host}}: exit status 7 (86.653536ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.456502442s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-995998 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.985881ms)

-- stdout --
	* [kubernetes-upgrade-995998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-995998
	    minikube start -p kubernetes-upgrade-995998 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9959982 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-995998 --kubernetes-version=v1.34.1

** /stderr **
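The K8S_DOWNGRADE_UNSUPPORTED guard above boils down to a semver comparison between the version the user requested and the version the existing cluster already runs. A minimal sketch of that check in Go (hand-rolled parsing for illustration; `parse` and `isDowngrade` are hypothetical names, and minikube's real implementation lives elsewhere):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a "vMAJOR.MINOR.PATCH" string into numeric components.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// isDowngrade reports whether moving a cluster from `current` to
// `requested` would lower its Kubernetes version.
func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.34.1", "v1.28.0")) // true: refused, as in the log above
	fmt.Println(isDowngrade("v1.28.0", "v1.34.1")) // false: upgrades are allowed
}
```

When a downgrade is detected, minikube exits with status 106 rather than attempting it, since control-plane components and etcd data cannot be safely rolled back in place.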
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-995998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.230561071s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-995998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-995998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-995998: (1.405212505s)
--- PASS: TestKubernetesUpgrade (210.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (104.294051ms)

-- stdout --
	* [NoKubernetes-502314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (82.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502314 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502314 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.609000401s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-502314 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.91s)

TestNetworkPlugins/group/false (3.39s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-309253 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-309253 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (110.040051ms)

-- stdout --
	* [false-309253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I1126 20:29:46.295757   35883 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:29:46.296017   35883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:29:46.296027   35883 out.go:374] Setting ErrFile to fd 2...
	I1126 20:29:46.296032   35883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:29:46.296223   35883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
	I1126 20:29:46.296654   35883 out.go:368] Setting JSON to false
	I1126 20:29:46.297491   35883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4336,"bootTime":1764184650,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1126 20:29:46.297540   35883 start.go:143] virtualization: kvm guest
	I1126 20:29:46.299266   35883 out.go:179] * [false-309253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1126 20:29:46.300556   35883 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:29:46.300555   35883 notify.go:221] Checking for updates...
	I1126 20:29:46.303126   35883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:29:46.304382   35883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
	I1126 20:29:46.305529   35883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
	I1126 20:29:46.306645   35883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1126 20:29:46.307844   35883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:29:46.309469   35883 config.go:182] Loaded profile config "NoKubernetes-502314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:29:46.309558   35883 config.go:182] Loaded profile config "force-systemd-env-626886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:29:46.309663   35883 config.go:182] Loaded profile config "offline-crio-482178": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:29:46.309756   35883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:29:46.344212   35883 out.go:179] * Using the kvm2 driver based on user configuration
	I1126 20:29:46.345466   35883 start.go:309] selected driver: kvm2
	I1126 20:29:46.345480   35883 start.go:927] validating driver "kvm2" against <nil>
	I1126 20:29:46.345490   35883 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:29:46.347284   35883 out.go:203] 
	W1126 20:29:46.348490   35883 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1126 20:29:46.349789   35883 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-309253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-309253

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-309253

>>> host: /etc/nsswitch.conf:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /etc/hosts:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /etc/resolv.conf:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-309253

>>> host: crictl pods:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: crictl containers:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> k8s: describe netcat deployment:
error: context "false-309253" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-309253" does not exist

>>> k8s: netcat logs:
error: context "false-309253" does not exist

>>> k8s: describe coredns deployment:
error: context "false-309253" does not exist

>>> k8s: describe coredns pods:
error: context "false-309253" does not exist

>>> k8s: coredns logs:
error: context "false-309253" does not exist

>>> k8s: describe api server pod(s):
error: context "false-309253" does not exist

>>> k8s: api server logs:
error: context "false-309253" does not exist

>>> host: /etc/cni:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: ip a s:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: ip r s:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: iptables-save:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: iptables table nat:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> k8s: describe kube-proxy daemon set:
error: context "false-309253" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-309253" does not exist

>>> k8s: kube-proxy logs:
error: context "false-309253" does not exist

>>> host: kubelet daemon status:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: kubelet daemon config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> k8s: kubelet logs:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-309253

>>> host: docker daemon status:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: docker daemon config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /etc/docker/daemon.json:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: docker system info:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: cri-docker daemon status:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: cri-docker daemon config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

>>> host: cri-dockerd version:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-309253"

                                                
                                                
----------------------- debugLogs end: false-309253 [took: 3.125276149s] --------------------------------
helpers_test.go:175: Cleaning up "false-309253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-309253
--- PASS: TestNetworkPlugins/group/false (3.39s)

TestStoppedBinaryUpgrade/Setup (0.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

TestStoppedBinaryUpgrade/Upgrade (135.75s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1873420307 start -p stopped-upgrade-473452 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1126 20:30:48.493193   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1873420307 start -p stopped-upgrade-473452 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m18.126656439s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1873420307 -p stopped-upgrade-473452 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1873420307 -p stopped-upgrade-473452 stop: (1.904096123s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-473452 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-473452 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.716896028s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.75s)

TestNoKubernetes/serial/StartWithStopK8s (48.46s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.293838673s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-502314 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-502314 status -o json: exit status 2 (224.511017ms)

-- stdout --
	{"Name":"NoKubernetes-502314","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
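The status payload above can be decoded programmatically; a minimal sketch (the JSON literal is copied verbatim from the stdout block, nothing else is assumed about the test harness):

```python
import json

# Status JSON as printed by `minikube status -o json` above; with
# --no-kubernetes the host runs but kubelet/apiserver stay stopped,
# which is why the command exits with status 2 rather than 0.
raw = '{"Name":"NoKubernetes-502314","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
status = json.loads(raw)
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
```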
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-502314
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.46s)

TestNoKubernetes/serial/Start (50.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.913775393s)
--- PASS: TestNoKubernetes/serial/Start (50.91s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-473452
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-473452: (1.179390499s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestPause/serial/Start (97.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-352397 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-352397 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.955525511s)
--- PASS: TestPause/serial/Start (97.96s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-502314 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-502314 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.131846ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-502314
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-502314: (1.347793278s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (47.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-502314 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-502314 --driver=kvm2  --container-runtime=crio: (47.771498399s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.77s)

TestISOImage/Setup (30.7s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-198710 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-198710 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.698150418s)
--- PASS: TestISOImage/Setup (30.70s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-502314 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-502314 "sudo systemctl is-active --quiet service kubelet": exit status 1 (172.610393ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestISOImage/Binaries/crictl (0.2s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

TestISOImage/Binaries/curl (0.18s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.2s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

TestISOImage/Binaries/iptables (0.19s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.18s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

TestISOImage/Binaries/socat (0.18s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

TestISOImage/Binaries/wget (0.2s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

TestISOImage/Binaries/VBoxControl (0.2s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

TestISOImage/Binaries/VBoxService (0.19s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)

TestPause/serial/SecondStartNoReconfiguration (50.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-352397 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-352397 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.268102345s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.30s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-352397 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.23s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-352397 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-352397 --output=json --layout=cluster: exit status 2 (230.871027ms)

-- stdout --
	{"Name":"pause-352397","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-352397","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)

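The cluster-layout status printed by `minikube status --output=json --layout=cluster` in the VerifyStatus entry above can likewise be decoded; a minimal sketch (the JSON is a trimmed copy of that stdout block, keeping only the fields checked here — the real assertions live in status_test.go):

```python
import json

# Trimmed cluster-layout status: StatusCode 418 ("Paused") on the
# cluster and apiserver, with kubelet 405 ("Stopped"), is the state
# the paused-cluster check expects.
raw = ('{"Name":"pause-352397","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"pause-352397","StatusCode":200,"StatusName":"OK",'
       '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')
status = json.loads(raw)
node = status["Nodes"][0]
assert status["StatusName"] == "Paused"
assert node["Components"]["apiserver"]["StatusCode"] == 418
assert node["Components"]["kubelet"]["StatusName"] == "Stopped"
```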
TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-352397 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.97s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-352397 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

TestPause/serial/DeletePaused (0.9s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-352397 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.90s)

TestPause/serial/VerifyDeletedResources (0.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)

TestNetworkPlugins/group/auto/Start (94.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m34.699688682s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.70s)

TestNetworkPlugins/group/kindnet/Start (91.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1126 20:35:48.493128   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m31.594121657s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-309253 "pgrep -a kubelet"
I1126 20:36:49.430445   11003 config.go:182] Loaded profile config "auto-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-309253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-994t7" [69c9f298-158d-420f-b2b0-be7e2f039eee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-994t7" [69c9f298-158d-420f-b2b0-be7e2f039eee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004775405s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-n2vgg" [07596bce-9b0e-43ad-81ec-9ac3194d4ad6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004896082s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (73.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m13.829239974s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.83s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-309253 "pgrep -a kubelet"
I1126 20:37:15.560430   11003 config.go:182] Loaded profile config "kindnet-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-309253 replace --force -f testdata/netcat-deployment.yaml
I1126 20:37:15.851103   11003 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zw2dz" [36777ca0-67ca-46d7-bee3-228cbf7ae970] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1126 20:37:21.020810   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zw2dz" [36777ca0-67ca-46d7-bee3-228cbf7ae970] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00625831s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

TestNetworkPlugins/group/custom-flannel/Start (85.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.441019052s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.44s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (117.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m57.207982712s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (117.21s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2z2px" [80968683-612a-4588-a192-0eadad36a0a2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-2z2px" [80968683-612a-4588-a192-0eadad36a0a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005171969s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (75.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m15.046755432s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-309253 "pgrep -a kubelet"
I1126 20:38:35.214969   11003 config.go:182] Loaded profile config "calico-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-309253 replace --force -f testdata/netcat-deployment.yaml
I1126 20:38:35.485847   11003 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2rpw4" [43c2b9da-1287-46ec-9126-c8512cedb50a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2rpw4" [43c2b9da-1287-46ec-9126-c8512cedb50a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.007811611s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-309253 "pgrep -a kubelet"
I1126 20:38:48.554532   11003 config.go:182] Loaded profile config "custom-flannel-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-309253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nq6rj" [612e4157-89ec-4ae6-9aec-08966122f537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nq6rj" [612e4157-89ec-4ae6-9aec-08966122f537] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005180342s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (89.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-309253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m29.719881364s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.72s)

TestStartStop/group/old-k8s-version/serial/FirstStart (69.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-431698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-431698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m9.609184192s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.61s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-309253 "pgrep -a kubelet"
I1126 20:39:41.657979   11003 config.go:182] Loaded profile config "enable-default-cni-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-309253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cj7qp" [ee582950-543e-45ad-9e8d-8c4ee0d535a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cj7qp" [ee582950-543e-45ad-9e8d-8c4ee0d535a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004817708s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zvpgl" [84f9d482-ad37-4ac0-8efe-9d78cbf12fff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00590135s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-309253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-309253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xv4wt" [8e227bfe-9b75-45cc-8e87-752ea0383971] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xv4wt" [8e227bfe-9b75-45cc-8e87-752ea0383971] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003958688s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/no-preload/serial/FirstStart (74.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-466828 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-466828 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.625009616s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.63s)

TestStartStop/group/embed-certs/serial/FirstStart (91.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-476016 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-476016 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m31.608568918s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-431698 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2f5b04e0-bc92-4082-9d7d-57a89644f159] Pending
helpers_test.go:352: "busybox" [2f5b04e0-bc92-4082-9d7d-57a89644f159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2f5b04e0-bc92-4082-9d7d-57a89644f159] Running
E1126 20:40:31.563115   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/functional-110910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005382711s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-431698 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-309253 "pgrep -a kubelet"
I1126 20:40:34.426751   11003 config.go:182] Loaded profile config "bridge-309253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-309253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-trtrd" [c5ee0230-28b6-4591-8607-fbe65a17ec10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-trtrd" [c5ee0230-28b6-4591-8607-fbe65a17ec10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005480463s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-431698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-431698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.276789763s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-431698 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/old-k8s-version/serial/Stop (81.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-431698 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-431698 --alsologtostderr -v=3: (1m21.687618889s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.69s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-309253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-309253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-464382 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-464382 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.926504742s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-466828 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0a826367-1b6a-4002-b44e-1d2f6a00c806] Pending
helpers_test.go:352: "busybox" [0a826367-1b6a-4002-b44e-1d2f6a00c806] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0a826367-1b6a-4002-b44e-1d2f6a00c806] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004383264s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-466828 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-466828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-466828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.112000608s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-466828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (72.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-466828 --alsologtostderr -v=3
E1126 20:41:49.682217   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:49.688625   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:49.700003   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:49.721403   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:49.762832   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:49.844401   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:50.005976   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:50.328260   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:50.969843   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:41:52.251245   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-466828 --alsologtostderr -v=3: (1m12.737605937s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (72.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-476016 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fb5e0350-7712-402f-8f0a-c9d18d486a40] Pending
E1126 20:41:54.813026   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [fb5e0350-7712-402f-8f0a-c9d18d486a40] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fb5e0350-7712-402f-8f0a-c9d18d486a40] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003648218s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-476016 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-464382 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b1388b7d-7102-4b24-8ec6-769325df9452] Pending
helpers_test.go:352: "busybox" [b1388b7d-7102-4b24-8ec6-769325df9452] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1126 20:41:59.935181   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b1388b7d-7102-4b24-8ec6-769325df9452] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00389585s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-464382 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431698 -n old-k8s-version-431698
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431698 -n old-k8s-version-431698: exit status 7 (60.422706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-431698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-431698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-431698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (47.487311306s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-431698 -n old-k8s-version-431698
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-476016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-476016 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (83.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-476016 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-476016 --alsologtostderr -v=3: (1m23.702329167s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-464382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-464382 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (89.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-464382 --alsologtostderr -v=3
E1126 20:42:09.347509   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.353976   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.365488   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.387061   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.428679   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.510209   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.671684   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:09.993839   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:10.177491   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:10.635372   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:11.917035   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:14.478719   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:19.600245   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:21.020751   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:29.842276   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:42:30.659224   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-464382 --alsologtostderr -v=3: (1m29.530748335s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-466828 -n no-preload-466828
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-466828 -n no-preload-466828: exit status 7 (64.347345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-466828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (55.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-466828 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-466828 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (55.485935504s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-466828 -n no-preload-466828
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h875w" [fec01661-47a9-419b-ba9e-f4389141e578] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h875w" [fec01661-47a9-419b-ba9e-f4389141e578] Running
E1126 20:42:50.324442   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005410134s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h875w" [fec01661-47a9-419b-ba9e-f4389141e578] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004769109s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-431698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-431698 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-431698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431698 -n old-k8s-version-431698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431698 -n old-k8s-version-431698: exit status 2 (213.444845ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-431698 -n old-k8s-version-431698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-431698 -n old-k8s-version-431698: exit status 2 (207.783942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-431698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-431698 -n old-k8s-version-431698
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-431698 -n old-k8s-version-431698
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-527380 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:43:11.620679   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/auto-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-527380 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (47.073776466s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-476016 -n embed-certs-476016
E1126 20:43:29.006142   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:29.012611   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:29.024226   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:29.045832   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-476016 -n embed-certs-476016: exit status 7 (79.85082ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-476016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1126 20:43:29.088100   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:29.170185   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-476016 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:43:29.332421   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:29.654030   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:30.295919   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:31.286005   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/kindnet-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:31.578006   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:34.139955   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-476016 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (49.329465892s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-476016 -n embed-certs-476016
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382: exit status 7 (77.899902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-464382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-464382 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:43:39.261865   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-464382 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.410634404s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9rpkn" [87f6557e-4238-47a6-bedb-33cd33dad8dc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1126 20:43:48.767988   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:48.774443   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:48.785882   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:48.807564   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:48.849184   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:48.930583   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9rpkn" [87f6557e-4238-47a6-bedb-33cd33dad8dc] Running
E1126 20:43:49.092783   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:49.414512   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:49.503652   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:50.056385   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:43:51.338503   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005997685s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-527380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-527380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.961328644s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-527380 --alsologtostderr -v=3
E1126 20:43:53.899886   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-527380 --alsologtostderr -v=3: (11.212372765s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9rpkn" [87f6557e-4238-47a6-bedb-33cd33dad8dc] Running
E1126 20:43:59.022386   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005004894s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-466828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-466828 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-466828 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-466828 -n no-preload-466828
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-466828 -n no-preload-466828: exit status 2 (275.339791ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-466828 -n no-preload-466828
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-466828 -n no-preload-466828: exit status 2 (275.343429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-466828 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-466828 --alsologtostderr -v=1: (1.022143314s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-466828 -n no-preload-466828
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-466828 -n no-preload-466828
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-527380 -n newest-cni-527380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-527380 -n newest-cni-527380: exit status 7 (79.193813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-527380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (42.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-527380 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-527380 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (41.960231594s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-527380 -n newest-cni-527380
E1126 20:44:46.994220   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:44:47.041762   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/enable-default-cni-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.31s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                    
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.18s)

                                                
                                    
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-198710 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1126 20:44:09.263751   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:44:09.985341   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/calico-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5x7ww" [3816fe7f-0971-4fe8-9b6e-af3f7797fe89] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02229126s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5x7ww" [3816fe7f-0971-4fe8-9b6e-af3f7797fe89] Running
E1126 20:44:29.745778   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006241239s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-476016 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-476016 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-476016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-476016 --alsologtostderr -v=1: (1.075646552s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-476016 -n embed-certs-476016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-476016 -n embed-certs-476016: exit status 2 (277.533936ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-476016 -n embed-certs-476016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-476016 -n embed-certs-476016: exit status 2 (243.17103ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-476016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-476016 -n embed-certs-476016
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-476016 -n embed-certs-476016
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-527380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-527380 --alsologtostderr -v=1
E1126 20:44:47.636538   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-527380 --alsologtostderr -v=1: (1.075650174s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-527380 -n newest-cni-527380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-527380 -n newest-cni-527380: exit status 2 (328.67062ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-527380 -n newest-cni-527380
E1126 20:44:48.918077   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-527380 -n newest-cni-527380: exit status 2 (298.218935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-527380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-527380 -n newest-cni-527380
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-527380 -n newest-cni-527380
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.39s)
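The Pause subtest above drives one fixed cycle: pause the profile, read the reported APIServer and Kubelet states (while paused, `status` exits 2, which the test logs as "may be ok" and tolerates), then unpause. A minimal sketch of just the state check, under those observations — `state_ok` is a hypothetical helper, not the test's actual code:

```shell
# Sketch of the state check behind the Pause subtest: while a profile is
# paused, `minikube status` exits non-zero but reports APIServer=Paused
# and Kubelet=Stopped. state_ok is a hypothetical helper, not minikube code.
state_ok() {
  [ "$1" = "Paused" ] && [ "$2" = "Stopped" ]
}
# Against a live profile (profile name taken from the log; not run here):
#   minikube pause -p newest-cni-527380
#   api=$(minikube status --format='{{.APIServer}}' -p newest-cni-527380 || true)
#   kubelet=$(minikube status --format='{{.Kubelet}}' -p newest-cni-527380 || true)
#   state_ok "$api" "$kubelet" && minikube unpause -p newest-cni-527380
```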

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zr9fz" [6096c735-2e11-46f9-8183-7504777559aa] Running
E1126 20:45:02.405858   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/enable-default-cni-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003972244s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zr9fz" [6096c735-2e11-46f9-8183-7504777559aa] Running
E1126 20:45:06.843609   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003901196s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-464382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-464382 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)
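The VerifyKubernetesImages subtest lists the profile's images and reports any that fall outside minikube's expected set; here `kindnetd` and the `busybox` test image are flagged but tolerated. A hypothetical sketch of that filtering idea — the allow-list below is illustrative for the sketch only, NOT minikube's real list:

```shell
# Illustrative filter in the spirit of VerifyKubernetesImages: flag images
# whose repository lies outside an expected set. The allow-list here is a
# guess for illustration, not the list the test actually uses.
is_expected() {
  case "$1" in
    registry.k8s.io/*|gcr.io/k8s-minikube/storage-provisioner*) return 0 ;;
    *) return 1 ;;
  esac
}
# Usage against a live profile (not run here):
#   out/minikube-linux-amd64 -p default-k8s-diff-port-464382 image list
```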

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-464382 --alsologtostderr -v=1
E1126 20:45:10.707711   11003 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/custom-flannel-309253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382: exit status 2 (216.10645ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382: exit status 2 (224.585104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-464382 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-464382 -n default-k8s-diff-port-464382
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

Test skip (40/351)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.39
267 TestNetworkPlugins/group/cilium 3.64
278 TestStartStop/group/disable-driver-mounts 0.19

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198878 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-309253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-309253

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-309253

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/hosts:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/resolv.conf:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-309253

>>> host: crictl pods:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: crictl containers:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> k8s: describe netcat deployment:
error: context "kubenet-309253" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-309253" does not exist

>>> k8s: netcat logs:
error: context "kubenet-309253" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-309253" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-309253" does not exist

>>> k8s: coredns logs:
error: context "kubenet-309253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-309253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-309253" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-309253" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-309253" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-309253" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: kubelet daemon config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> k8s: kubelet logs:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-309253

>>> host: docker daemon status:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: docker daemon config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: docker system info:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: cri-docker daemon status:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: cri-docker daemon config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: cri-dockerd version:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: containerd daemon status:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: containerd daemon config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: containerd config dump:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: crio daemon status:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: crio daemon config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: /etc/crio:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

>>> host: crio config:
* Profile "kubenet-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-309253"

----------------------- debugLogs end: kubenet-309253 [took: 3.225328671s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-309253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-309253
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)

TestNetworkPlugins/group/cilium (3.64s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-309253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-309253

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-309253

>>> host: /etc/nsswitch.conf:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/hosts:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/resolv.conf:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-309253

>>> host: crictl pods:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: crictl containers:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> k8s: describe netcat deployment:
error: context "cilium-309253" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-309253" does not exist

>>> k8s: netcat logs:
error: context "cilium-309253" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-309253" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-309253" does not exist

>>> k8s: coredns logs:
error: context "cilium-309253" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-309253" does not exist

>>> k8s: api server logs:
error: context "cilium-309253" does not exist

>>> host: /etc/cni:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: ip a s:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: ip r s:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: iptables-save:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: iptables table nat:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-309253

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-309253

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-309253" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-309253" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-309253

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-309253

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-309253" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-309253" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-309253" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-309253" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-309253" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: kubelet daemon config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> k8s: kubelet logs:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-309253

>>> host: docker daemon status:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: docker daemon config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: docker system info:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: cri-docker daemon status:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: cri-docker daemon config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: cri-dockerd version:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: containerd daemon status:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: containerd daemon config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

>>> host: containerd config dump:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-309253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-309253"

                                                
                                                
----------------------- debugLogs end: cilium-309253 [took: 3.463022256s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-309253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-309253
--- SKIP: TestNetworkPlugins/group/cilium (3.64s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-521535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-521535
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)