Test Report: KVM_Linux_crio 21968

c47dc458d63a230593369798adacaa3ab200078c:2025-11-23:42467

Tests failed (2/351)

Order  Failed test                  Duration (s)
37     TestAddons/parallel/Ingress  160.49
244    TestPreload                  175.47
TestAddons/parallel/Ingress (160.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-894046 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-894046 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-894046 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [03bc90d8-f0b1-44e9-84f8-0d66efc32c7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [03bc90d8-f0b1-44e9-84f8-0d66efc32c7b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.006759326s
I1123 09:25:10.067460    7590 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-894046 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.199445149s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-894046 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.58
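For context on the failure above: curl exits with code 28 when a request times out, and `minikube ssh` propagates the remote command's exit status, which is what produced `ssh: Process exited with status 28`. A minimal sketch of the failing probe, assuming a minikube binary on PATH and the profile name from this run:

```go
// Sketch only: re-run the ingress probe that timed out in this test.
// curl exits 28 on timeout; `minikube ssh` surfaces that status to us.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "addons-894046", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 28 here means curl timed out inside the VM.
		fmt.Printf("exit code %d, output: %s\n", exitErr.ExitCode(), out)
		return
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("response: %s\n", out)
}
```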
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-894046 -n addons-894046
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 logs -n 25: (1.15844614s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-257281                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-257281 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ start   │ --download-only -p binary-mirror-687799 --alsologtostderr --binary-mirror http://127.0.0.1:40155 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-687799 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	│ delete  │ -p binary-mirror-687799                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-687799 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p addons-894046                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-894046                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	│ start   │ -p addons-894046 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-894046 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-894046 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ enable headlamp -p addons-894046 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-894046 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-894046 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:24 UTC │ 23 Nov 25 09:24 UTC │
	│ addons  │ addons-894046 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-894046                                                                                                                                                                                                                                                                                                                                                                                         │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ ssh     │ addons-894046 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │                     │
	│ ip      │ addons-894046 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ ssh     │ addons-894046 ssh cat /opt/local-path-provisioner/pvc-8c9015e7-12ea-468b-b7fc-daa74eb34219_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:25 UTC │
	│ addons  │ addons-894046 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:25 UTC │ 23 Nov 25 09:26 UTC │
	│ addons  │ addons-894046 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:26 UTC │ 23 Nov 25 09:26 UTC │
	│ addons  │ addons-894046 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:26 UTC │ 23 Nov 25 09:26 UTC │
	│ ip      │ addons-894046 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-894046        │ jenkins │ v1.37.0 │ 23 Nov 25 09:27 UTC │ 23 Nov 25 09:27 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:20:59
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:20:59.155990    8324 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:20:59.156094    8324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:59.156104    8324 out.go:374] Setting ErrFile to fd 2...
	I1123 09:20:59.156112    8324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:59.156317    8324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:20:59.156825    8324 out.go:368] Setting JSON to false
	I1123 09:20:59.157641    8324 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":197,"bootTime":1763889462,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:20:59.157690    8324 start.go:143] virtualization: kvm guest
	I1123 09:20:59.159328    8324 out.go:179] * [addons-894046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:20:59.160573    8324 notify.go:221] Checking for updates...
	I1123 09:20:59.160598    8324 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:20:59.162111    8324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:20:59.163518    8324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:20:59.164644    8324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:20:59.165735    8324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:20:59.166849    8324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:20:59.168214    8324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:20:59.197166    8324 out.go:179] * Using the kvm2 driver based on user configuration
	I1123 09:20:59.198124    8324 start.go:309] selected driver: kvm2
	I1123 09:20:59.198134    8324 start.go:927] validating driver "kvm2" against <nil>
	I1123 09:20:59.198143    8324 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:20:59.198801    8324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:20:59.199027    8324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:20:59.199059    8324 cni.go:84] Creating CNI manager for ""
	I1123 09:20:59.199100    8324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:20:59.199108    8324 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1123 09:20:59.199147    8324 start.go:353] cluster config:
	{Name:addons-894046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-894046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:20:59.199245    8324 iso.go:125] acquiring lock: {Name:mkda1f2156fa5a41237d44afe14c60be86e641cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:20:59.200618    8324 out.go:179] * Starting "addons-894046" primary control-plane node in "addons-894046" cluster
	I1123 09:20:59.201689    8324 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:20:59.201715    8324 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:20:59.201727    8324 cache.go:65] Caching tarball of preloaded images
	I1123 09:20:59.201788    8324 preload.go:238] Found /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:20:59.201804    8324 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:20:59.202089    8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/config.json ...
	I1123 09:20:59.202109    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/config.json: {Name:mkdc420925c9eb1375b6ee9a9cd6ac84ff84a098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:20:59.202234    8324 start.go:360] acquireMachinesLock for addons-894046: {Name:mk3faa1cfbcacb62e9602286e0ef7afeec78d5f2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1123 09:20:59.202279    8324 start.go:364] duration metric: took 32.539µs to acquireMachinesLock for "addons-894046"
	I1123 09:20:59.202297    8324 start.go:93] Provisioning new machine with config: &{Name:addons-894046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-894046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:20:59.202339    8324 start.go:125] createHost starting for "" (driver="kvm2")
	I1123 09:20:59.203972    8324 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1123 09:20:59.204115    8324 start.go:159] libmachine.API.Create for "addons-894046" (driver="kvm2")
	I1123 09:20:59.204144    8324 client.go:173] LocalClient.Create starting
	I1123 09:20:59.204223    8324 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem
	I1123 09:20:59.222929    8324 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem
	I1123 09:20:59.365746    8324 main.go:143] libmachine: creating domain...
	I1123 09:20:59.365766    8324 main.go:143] libmachine: creating network...
	I1123 09:20:59.367188    8324 main.go:143] libmachine: found existing default network
	I1123 09:20:59.367434    8324 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1123 09:20:59.368046    8324 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015fdf50}
	I1123 09:20:59.368169    8324 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-894046</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1123 09:20:59.373981    8324 main.go:143] libmachine: creating private network mk-addons-894046 192.168.39.0/24...
	I1123 09:20:59.439274    8324 main.go:143] libmachine: private network mk-addons-894046 192.168.39.0/24 created
	I1123 09:20:59.439569    8324 main.go:143] libmachine: <network>
	  <name>mk-addons-894046</name>
	  <uuid>6a08246a-3282-4cee-97f6-9f9dd47a8fb1</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:85:c9'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
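The define/create sequence above maps onto libvirt's network API. A minimal sketch using the official libvirt.org/go/libvirt bindings (an assumption; minikube's KVM driver wraps these calls in its own helpers) that defines and starts the same network:

```go
// Sketch only: define the persistent libvirt network logged above, then
// start it. Requires a reachable qemu:///system socket and cgo libvirt.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-addons-894046</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network, then bring it up.
	net, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("network mk-addons-894046 is active")
}
```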
	
	I1123 09:20:59.439605    8324 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046 ...
	I1123 09:20:59.439639    8324 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21968-3638/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1123 09:20:59.439651    8324 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:20:59.439720    8324 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21968-3638/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21968-3638/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1123 09:20:59.711202    8324 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa...
	I1123 09:20:59.729708    8324 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/addons-894046.rawdisk...
	I1123 09:20:59.729743    8324 main.go:143] libmachine: Writing magic tar header
	I1123 09:20:59.729770    8324 main.go:143] libmachine: Writing SSH key tar header
	I1123 09:20:59.729855    8324 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046 ...
	I1123 09:20:59.729949    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046
	I1123 09:20:59.729994    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046 (perms=drwx------)
	I1123 09:20:59.730021    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21968-3638/.minikube/machines
	I1123 09:20:59.730040    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21968-3638/.minikube/machines (perms=drwxr-xr-x)
	I1123 09:20:59.730062    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:20:59.730081    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21968-3638/.minikube (perms=drwxr-xr-x)
	I1123 09:20:59.730095    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21968-3638
	I1123 09:20:59.730109    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21968-3638 (perms=drwxrwxr-x)
	I1123 09:20:59.730122    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1123 09:20:59.730139    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1123 09:20:59.730154    8324 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1123 09:20:59.730167    8324 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1123 09:20:59.730187    8324 main.go:143] libmachine: checking permissions on dir: /home
	I1123 09:20:59.730201    8324 main.go:143] libmachine: skipping /home - not owner
	I1123 09:20:59.730208    8324 main.go:143] libmachine: defining domain...
	I1123 09:20:59.731349    8324 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-894046</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/addons-894046.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-894046'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1123 09:20:59.738715    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:3b:94:70 in network default
	I1123 09:20:59.739290    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:20:59.739307    8324 main.go:143] libmachine: starting domain...
	I1123 09:20:59.739313    8324 main.go:143] libmachine: ensuring networks are active...
	I1123 09:20:59.739989    8324 main.go:143] libmachine: Ensuring network default is active
	I1123 09:20:59.740328    8324 main.go:143] libmachine: Ensuring network mk-addons-894046 is active
	I1123 09:20:59.741972    8324 main.go:143] libmachine: getting domain XML...
	I1123 09:20:59.742841    8324 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-894046</name>
	  <uuid>d90b3a70-0e18-440a-b578-2bbb6d15827a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/addons-894046.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:10:3c:de'/>
	      <source network='mk-addons-894046'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3b:94:70'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
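Defining and booting the VM follows the same persistent define-then-create pattern. A sketch under the same binding assumption, with domainXML standing in for the dump above:

```go
// Sketch only: define a persistent domain from XML and boot it, mirroring
// the "defining domain..."/"starting domain..." steps in this log.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// Placeholder for the full <domain> XML dumped above.
const domainXML = `<domain type='kvm'>...</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Create() starts the defined (persistent) domain, i.e. boots the VM.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain addons-894046 is running")
}
```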
	
	I1123 09:21:01.005444    8324 main.go:143] libmachine: waiting for domain to start...
	I1123 09:21:01.006721    8324 main.go:143] libmachine: domain is now running
	I1123 09:21:01.006737    8324 main.go:143] libmachine: waiting for IP...
	I1123 09:21:01.007380    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:01.007879    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:01.007891    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:01.008141    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:01.008178    8324 retry.go:31] will retry after 228.751248ms: waiting for domain to come up
	I1123 09:21:01.238808    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:01.239547    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:01.239560    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:01.239849    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:01.239899    8324 retry.go:31] will retry after 277.907769ms: waiting for domain to come up
	I1123 09:21:01.519298    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:01.519981    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:01.519995    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:01.520309    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:01.520347    8324 retry.go:31] will retry after 480.041196ms: waiting for domain to come up
	I1123 09:21:02.002203    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:02.002867    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:02.002887    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:02.003279    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:02.003319    8324 retry.go:31] will retry after 529.48047ms: waiting for domain to come up
	I1123 09:21:02.533980    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:02.534630    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:02.534651    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:02.534907    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:02.534955    8324 retry.go:31] will retry after 470.918264ms: waiting for domain to come up
	I1123 09:21:03.007573    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:03.008208    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:03.008243    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:03.008496    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:03.008531    8324 retry.go:31] will retry after 636.151809ms: waiting for domain to come up
	I1123 09:21:03.646489    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:03.647216    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:03.647233    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:03.647624    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:03.647662    8324 retry.go:31] will retry after 749.266207ms: waiting for domain to come up
	I1123 09:21:04.398688    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:04.399458    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:04.399478    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:04.399767    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:04.399804    8324 retry.go:31] will retry after 1.097777636s: waiting for domain to come up
	I1123 09:21:05.499175    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:05.499835    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:05.499852    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:05.500209    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:05.500249    8324 retry.go:31] will retry after 1.225885839s: waiting for domain to come up
	I1123 09:21:06.727711    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:06.728421    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:06.728439    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:06.728726    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:06.728761    8324 retry.go:31] will retry after 2.111624867s: waiting for domain to come up
	I1123 09:21:08.843150    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:08.843930    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:08.843960    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:08.844284    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:08.844329    8324 retry.go:31] will retry after 2.850330733s: waiting for domain to come up
	I1123 09:21:11.698331    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:11.699029    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:11.699044    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:11.699305    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:11.699334    8324 retry.go:31] will retry after 2.534763411s: waiting for domain to come up
	I1123 09:21:14.236320    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:14.236982    8324 main.go:143] libmachine: no network interface addresses found for domain addons-894046 (source=lease)
	I1123 09:21:14.236997    8324 main.go:143] libmachine: trying to list again with source=arp
	I1123 09:21:14.237269    8324 main.go:143] libmachine: unable to find current IP address of domain addons-894046 in network mk-addons-894046 (interfaces detected: [])
	I1123 09:21:14.237300    8324 retry.go:31] will retry after 4.416216136s: waiting for domain to come up
	I1123 09:21:18.658707    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.659394    8324 main.go:143] libmachine: domain addons-894046 has current primary IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.659406    8324 main.go:143] libmachine: found domain IP: 192.168.39.58
	I1123 09:21:18.659412    8324 main.go:143] libmachine: reserving static IP address...
	I1123 09:21:18.659760    8324 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-894046", mac: "52:54:00:10:3c:de", ip: "192.168.39.58"} in network mk-addons-894046
	I1123 09:21:18.822141    8324 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain addons-894046
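The wait-for-IP loop above polls the private network's DHCP leases for the domain's MAC address, backing off between attempts. A simplified sketch of that loop (the real backoff and jitter live in minikube's retry.go; the intervals here are illustrative, while the MAC and network name are taken from this log):

```go
// Sketch only: poll libvirt DHCP leases until the domain's MAC gets an IP.
package main

import (
	"fmt"
	"log"
	"strings"
	"time"

	"libvirt.org/go/libvirt"
)

func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		net, err := conn.LookupNetworkByName(network)
		if err != nil {
			return "", err
		}
		leases, err := net.GetDHCPLeases()
		net.Free()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between polls
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s within %s", mac, network, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ip, err := waitForIP(conn, "mk-addons-894046", "52:54:00:10:3c:de", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain IP:", ip)
}
```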
	I1123 09:21:18.822161    8324 main.go:143] libmachine: waiting for SSH...
	I1123 09:21:18.822167    8324 main.go:143] libmachine: Getting to WaitForSSH function...
	I1123 09:21:18.824668    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.825114    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:18.825144    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.825365    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:18.825654    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:18.825669    8324 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1123 09:21:18.933617    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: 
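The "waiting for SSH" step amounts to dialing the guest and running `exit 0` until it succeeds. A sketch assuming the golang.org/x/crypto/ssh client (key path, user, and address are the ones logged above):

```go
// Sketch only: retry an SSH dial and a no-op command until sshd is ready.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		Timeout:         5 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.39.58:22", cfg)
		if err != nil {
			time.Sleep(time.Second) // sshd not up yet; retry
			continue
		}
		sess, err := client.NewSession()
		if err == nil {
			err = sess.Run("exit 0")
			sess.Close()
		}
		client.Close()
		if err == nil {
			log.Println("SSH is ready")
			return
		}
		time.Sleep(time.Second)
	}
}
```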
	I1123 09:21:18.934019    8324 main.go:143] libmachine: domain creation complete
	I1123 09:21:18.935567    8324 machine.go:94] provisionDockerMachine start ...
	I1123 09:21:18.937827    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.938188    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:18.938206    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:18.938362    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:18.938548    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:18.938565    8324 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:21:19.043716    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1123 09:21:19.043746    8324 buildroot.go:166] provisioning hostname "addons-894046"
	I1123 09:21:19.046704    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.047133    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.047162    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.047368    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:19.047644    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:19.047659    8324 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-894046 && echo "addons-894046" | sudo tee /etc/hostname
	I1123 09:21:19.172142    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-894046
	
	I1123 09:21:19.174822    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.175226    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.175246    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.175405    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:19.175629    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:19.175645    8324 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-894046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-894046/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-894046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:21:19.291183    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:21:19.291212    8324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3638/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3638/.minikube}
	I1123 09:21:19.291236    8324 buildroot.go:174] setting up certificates
	I1123 09:21:19.291252    8324 provision.go:84] configureAuth start
	I1123 09:21:19.294016    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.294389    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.294408    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.296436    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.296844    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.296866    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.296999    8324 provision.go:143] copyHostCerts
	I1123 09:21:19.297057    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/ca.pem (1078 bytes)
	I1123 09:21:19.297204    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/cert.pem (1123 bytes)
	I1123 09:21:19.297265    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/key.pem (1679 bytes)
	I1123 09:21:19.297310    8324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem org=jenkins.addons-894046 san=[127.0.0.1 192.168.39.58 addons-894046 localhost minikube]
	I1123 09:21:19.402754    8324 provision.go:177] copyRemoteCerts
	I1123 09:21:19.402810    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:21:19.405335    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.405647    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.405666    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.405836    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:19.490004    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:21:19.519788    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:21:19.549663    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:21:19.578896    8324 provision.go:87] duration metric: took 287.63287ms to configureAuth
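configureAuth issues a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, the libvirt IP, the profile name, localhost, minikube, per the san list above). Roughly the same certificate can be produced by hand with openssl; this is a sketch under the assumption that ca.pem/ca-key.pem are the CA pair referenced in the log:

    # Sketch: issue a server cert with the SANs minikube uses (file names assumed).
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-894046"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.58,DNS:addons-894046,DNS:localhost,DNS:minikube")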
	I1123 09:21:19.578920    8324 buildroot.go:189] setting minikube options for container-runtime
	I1123 09:21:19.579112    8324 config.go:182] Loaded profile config "addons-894046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:21:19.581819    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.582176    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.582197    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.582359    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:19.582593    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:19.582608    8324 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:21:19.827322    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:21:19.827355    8324 machine.go:97] duration metric: took 891.77037ms to provisionDockerMachine
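The step above drops a one-line environment file into /etc/sysconfig and restarts CRI-O so the service picks up --insecure-registry for the service CIDR. The same write, expressed as standalone commands (contents taken from the log output):

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio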
	I1123 09:21:19.827369    8324 client.go:176] duration metric: took 20.623218501s to LocalClient.Create
	I1123 09:21:19.827394    8324 start.go:167] duration metric: took 20.623276533s to libmachine.API.Create "addons-894046"
	I1123 09:21:19.827427    8324 start.go:293] postStartSetup for "addons-894046" (driver="kvm2")
	I1123 09:21:19.827441    8324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:21:19.827527    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:21:19.830227    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.830612    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.830638    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.830784    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:19.915668    8324 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:21:19.920693    8324 info.go:137] Remote host: Buildroot 2025.02
	I1123 09:21:19.920714    8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3638/.minikube/addons for local assets ...
	I1123 09:21:19.920788    8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3638/.minikube/files for local assets ...
	I1123 09:21:19.920814    8324 start.go:296] duration metric: took 93.37976ms for postStartSetup
	I1123 09:21:19.923663    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.923998    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.924017    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.924228    8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/config.json ...
	I1123 09:21:19.924392    8324 start.go:128] duration metric: took 20.722044851s to createHost
	I1123 09:21:19.926342    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.926690    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:19.926712    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:19.926846    8324 main.go:143] libmachine: Using SSH client type: native
	I1123 09:21:19.927101    8324 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1123 09:21:19.927115    8324 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1123 09:21:20.034555    8324 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763889679.997375115
	
	I1123 09:21:20.034580    8324 fix.go:216] guest clock: 1763889679.997375115
	I1123 09:21:20.034598    8324 fix.go:229] Guest: 2025-11-23 09:21:19.997375115 +0000 UTC Remote: 2025-11-23 09:21:19.924402387 +0000 UTC m=+20.813507648 (delta=72.972728ms)
	I1123 09:21:20.034613    8324 fix.go:200] guest clock delta is within tolerance: 72.972728ms
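The clock check compares `date +%s.%N` inside the guest against the host's wall clock and only resyncs when the delta exceeds a tolerance; here the ~73ms skew is accepted. A sketch of the comparison (SSH details and any threshold value are illustrative, not minikube's internals):

    # Sketch: measure guest/host clock skew over SSH.
    guest=$(ssh docker@192.168.39.58 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc)s"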
	I1123 09:21:20.034617    8324 start.go:83] releasing machines lock for "addons-894046", held for 20.832330394s
	I1123 09:21:20.037171    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.037564    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:20.037589    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.038081    8324 ssh_runner.go:195] Run: cat /version.json
	I1123 09:21:20.038171    8324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:21:20.041060    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.041105    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.041447    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:20.041505    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:20.041526    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.041579    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:20.041705    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:20.041914    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:20.118934    8324 ssh_runner.go:195] Run: systemctl --version
	I1123 09:21:20.150219    8324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:21:20.308227    8324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:21:20.315781    8324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:21:20.315847    8324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:21:20.336075    8324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:21:20.336095    8324 start.go:496] detecting cgroup driver to use...
	I1123 09:21:20.336163    8324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:21:20.356725    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:21:20.374703    8324 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:21:20.374768    8324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:21:20.394814    8324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:21:20.411928    8324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:21:20.560010    8324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:21:20.767624    8324 docker.go:234] disabling docker service ...
	I1123 09:21:20.767687    8324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:21:20.784275    8324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:21:20.799446    8324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:21:20.959292    8324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:21:21.108491    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:21:21.125340    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:21:21.149021    8324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:21:21.149135    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.162747    8324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:21:21.162833    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.175562    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.188707    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.201258    8324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:21:21.214349    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.227111    8324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:21:21.247898    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
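Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pinned pause image, cgroupfs as the cgroup manager, conmon in the pod cgroup, and unprivileged ports starting at 0. Approximately, the drop-in ends up containing the following (a reconstruction from the commands, with section placement assumed; the actual file is not captured in this log):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"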
	I1123 09:21:21.262860    8324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:21:21.274741    8324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 09:21:21.274799    8324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 09:21:21.296722    8324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
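The failed sysctl above just means br_netfilter was not loaded yet: /proc/sys/net/bridge/ only exists once the module is in. After the modprobe, the bridge netfilter sysctls appear and IPv4 forwarding is switched on. The equivalent manual check:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "cannot stat"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward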
	I1123 09:21:21.308571    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:21:21.452181    8324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:21:21.585695    8324 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:21:21.585802    8324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:21:21.591540    8324 start.go:564] Will wait 60s for crictl version
	I1123 09:21:21.591610    8324 ssh_runner.go:195] Run: which crictl
	I1123 09:21:21.595919    8324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1123 09:21:21.634348    8324 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1123 09:21:21.634472    8324 ssh_runner.go:195] Run: crio --version
	I1123 09:21:21.664657    8324 ssh_runner.go:195] Run: crio --version
	I1123 09:21:21.696431    8324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1123 09:21:21.699933    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:21.700288    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:21.700313    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:21.700514    8324 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1123 09:21:21.704907    8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:21:21.720319    8324 kubeadm.go:884] updating cluster {Name:addons-894046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-894046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:21:21.720417    8324 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:21:21.720454    8324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:21:21.750113    8324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 09:21:21.750175    8324 ssh_runner.go:195] Run: which lz4
	I1123 09:21:21.754445    8324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1123 09:21:21.759266    8324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1123 09:21:21.759287    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1123 09:21:23.264284    8324 crio.go:462] duration metric: took 1.509862997s to copy over tarball
	I1123 09:21:23.264370    8324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1123 09:21:24.921254    8324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.656842854s)
	I1123 09:21:24.921286    8324 crio.go:469] duration metric: took 1.656977551s to extract the tarball
	I1123 09:21:24.921295    8324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1123 09:21:24.963953    8324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:21:25.005563    8324 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:21:25.005587    8324 cache_images.go:86] Images are preloaded, skipping loading
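The preload path is: copy the ~410MB lz4 tarball over SSH, unpack it into /var so the images land directly in CRI-O's store, then delete the tarball, avoiding any registry pulls. By hand, the same two steps would look roughly like this (paths from the log; the scp user and key layout are assumptions):

    # Sketch: replicate minikube's preload copy + extract (SSH user/key assumed).
    scp -i ~/.minikube/machines/addons-894046/id_rsa \
      ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.39.58:/preloaded.tar.lz4
    ssh -i ~/.minikube/machines/addons-894046/id_rsa docker@192.168.39.58 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'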
	I1123 09:21:25.005595    8324 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
	I1123 09:21:25.005667    8324 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-894046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-894046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:21:25.005736    8324 ssh_runner.go:195] Run: crio config
	I1123 09:21:25.055209    8324 cni.go:84] Creating CNI manager for ""
	I1123 09:21:25.055254    8324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:21:25.055273    8324 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:21:25.055297    8324 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-894046 NodeName:addons-894046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:21:25.055449    8324 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-894046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:21:25.055521    8324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:21:25.067821    8324 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:21:25.067882    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:21:25.079212    8324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1123 09:21:25.100561    8324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:21:25.121272    8324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
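With the kubelet unit, its systemd drop-in, and kubeadm.yaml staged on the node, the rendered config can be sanity-checked before init. Newer kubeadm releases ship a validate subcommand that accepts the same --config flag (shown here as a sketch; this step is not part of the logged run):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new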
	I1123 09:21:25.141300    8324 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1123 09:21:25.145630    8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:21:25.160039    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:21:25.302705    8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:21:25.336755    8324 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046 for IP: 192.168.39.58
	I1123 09:21:25.336778    8324 certs.go:195] generating shared ca certs ...
	I1123 09:21:25.336794    8324 certs.go:227] acquiring lock for ca certs: {Name:mkc236b2df9db5d23fb877d4ca5dc928e3eefed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.336982    8324 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key
	I1123 09:21:25.410002    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt ...
	I1123 09:21:25.410030    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt: {Name:mk069fadf518af0ed152e3c7972ddca2cc6af520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.410185    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key ...
	I1123 09:21:25.410196    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key: {Name:mk5cbb1738911f36db1d77829238baa86f1cd508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.410277    8324 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key
	I1123 09:21:25.438086    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.crt ...
	I1123 09:21:25.438109    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.crt: {Name:mk4357e0faf726b369025dfc590f9b9ce558bdcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.438254    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key ...
	I1123 09:21:25.438264    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key: {Name:mk89380919346b0efca66ce03fb357f09a74242c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.438331    8324 certs.go:257] generating profile certs ...
	I1123 09:21:25.438380    8324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.key
	I1123 09:21:25.438393    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt with IP's: []
	I1123 09:21:25.465611    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt ...
	I1123 09:21:25.465636    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: {Name:mk63689d106a80bd2541fa21e604e33bd98b8a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.465770    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.key ...
	I1123 09:21:25.465780    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.key: {Name:mk843f55f760119bf856fcc2c874cf9a543af950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.466332    8324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key.3e94c9d8
	I1123 09:21:25.466353    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt.3e94c9d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I1123 09:21:25.536711    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt.3e94c9d8 ...
	I1123 09:21:25.536739    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt.3e94c9d8: {Name:mka52eedf114f943215b76b1314a5f983105a48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.536881    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key.3e94c9d8 ...
	I1123 09:21:25.536893    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key.3e94c9d8: {Name:mke1fc404545ec3a87d8c5f814d2fcabe24b2350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.536973    8324 certs.go:382] copying /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt.3e94c9d8 -> /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt
	I1123 09:21:25.537042    8324 certs.go:386] copying /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key.3e94c9d8 -> /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key
	I1123 09:21:25.537092    8324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.key
	I1123 09:21:25.537110    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.crt with IP's: []
	I1123 09:21:25.576918    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.crt ...
	I1123 09:21:25.576948    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.crt: {Name:mk28767e5c3007a77b91e326c9da6e3f0c113ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.577100    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.key ...
	I1123 09:21:25.577111    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.key: {Name:mk1904c32bd3075a04f67fcc9545e7558e79b31d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:25.577284    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:21:25.577322    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:21:25.577369    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:21:25.577394    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem (1679 bytes)
	I1123 09:21:25.577895    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:21:25.608983    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 09:21:25.639531    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:21:25.669393    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:21:25.697933    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:21:25.727755    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:21:25.756816    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:21:25.785798    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:21:25.815125    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:21:25.843901    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:21:25.864608    8324 ssh_runner.go:195] Run: openssl version
	I1123 09:21:25.871331    8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:21:25.884508    8324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:21:25.889677    8324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:21 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:21:25.889715    8324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:21:25.896975    8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
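The b5213941.0 symlink is OpenSSL's subject-hash naming: tools that trust /etc/ssl/certs look CAs up by the hash `openssl x509 -hash` prints, so minikubeCA only becomes trusted once a hash-named link exists. Reproducing the two commands above by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0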
	I1123 09:21:25.909996    8324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:21:25.914813    8324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:21:25.914865    8324 kubeadm.go:401] StartCluster: {Name:addons-894046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-894046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:21:25.914925    8324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:21:25.914988    8324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:21:25.951002    8324 cri.go:89] found id: ""
	I1123 09:21:25.951079    8324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:21:25.963466    8324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:21:25.975588    8324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:21:25.987473    8324 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:21:25.987497    8324 kubeadm.go:158] found existing configuration files:
	
	I1123 09:21:25.987543    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:21:26.001230    8324 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:21:26.001295    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:21:26.013873    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:21:26.025902    8324 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:21:26.025978    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:21:26.038317    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:21:26.049390    8324 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:21:26.049452    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:21:26.061332    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:21:26.073506    8324 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:21:26.073572    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:21:26.085572    8324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1123 09:21:26.136893    8324 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:21:26.136958    8324 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:21:26.231768    8324 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:21:26.231874    8324 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:21:26.232002    8324 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:21:26.243576    8324 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:21:26.245704    8324 out.go:252]   - Generating certificates and keys ...
	I1123 09:21:26.245774    8324 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:21:26.245852    8324 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:21:26.383566    8324 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:21:26.597266    8324 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:21:26.861096    8324 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:21:26.995459    8324 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:21:27.407005    8324 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:21:27.407140    8324 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-894046 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1123 09:21:27.842669    8324 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:21:27.842796    8324 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-894046 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1123 09:21:28.005804    8324 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:21:28.207166    8324 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:21:28.258015    8324 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:21:28.258092    8324 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:21:28.419319    8324 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:21:28.815005    8324 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:21:29.220328    8324 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:21:29.642808    8324 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:21:29.689309    8324 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:21:29.689425    8324 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:21:29.691455    8324 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:21:29.693137    8324 out.go:252]   - Booting up control plane ...
	I1123 09:21:29.693240    8324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:21:29.693365    8324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:21:29.694695    8324 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:21:29.713219    8324 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:21:29.713336    8324 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:21:29.720744    8324 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:21:29.721742    8324 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:21:29.722109    8324 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:21:29.894681    8324 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:21:29.894833    8324 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:21:30.895025    8324 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001380001s
	I1123 09:21:30.898019    8324 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:21:30.898144    8324 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
	I1123 09:21:30.898300    8324 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:21:30.898415    8324 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:21:34.262306    8324 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.366803011s
	I1123 09:21:35.983373    8324 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.089883628s
	I1123 09:21:37.895558    8324 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003395006s
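The control-plane-check phase polls the three health endpoints listed above until each returns 200. The same probes can be issued manually; -k skips TLS verification since these endpoints serve certificates from the cluster's own CA, and the two localhost endpoints must be hit from inside the node (e.g. via minikube ssh):

    curl -k https://192.168.39.58:8443/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager (on the node)
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler (on the node)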
	I1123 09:21:37.908271    8324 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:21:37.923570    8324 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:21:37.943913    8324 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:21:37.944195    8324 kubeadm.go:319] [mark-control-plane] Marking the node addons-894046 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:21:37.958947    8324 kubeadm.go:319] [bootstrap-token] Using token: t04c1o.ha8o24i0fnkf1qgm
	I1123 09:21:37.960291    8324 out.go:252]   - Configuring RBAC rules ...
	I1123 09:21:37.960434    8324 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:21:37.965652    8324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:21:37.973062    8324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:21:37.979033    8324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:21:37.985769    8324 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:21:37.990125    8324 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:21:38.301695    8324 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:21:38.749264    8324 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:21:39.302010    8324 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:21:39.303708    8324 kubeadm.go:319] 
	I1123 09:21:39.303794    8324 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:21:39.303825    8324 kubeadm.go:319] 
	I1123 09:21:39.303929    8324 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:21:39.303956    8324 kubeadm.go:319] 
	I1123 09:21:39.303996    8324 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:21:39.304103    8324 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:21:39.304184    8324 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:21:39.304193    8324 kubeadm.go:319] 
	I1123 09:21:39.304273    8324 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:21:39.304281    8324 kubeadm.go:319] 
	I1123 09:21:39.304348    8324 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:21:39.304355    8324 kubeadm.go:319] 
	I1123 09:21:39.304437    8324 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:21:39.304540    8324 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:21:39.304644    8324 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:21:39.304654    8324 kubeadm.go:319] 
	I1123 09:21:39.304743    8324 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:21:39.304857    8324 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:21:39.304869    8324 kubeadm.go:319] 
	I1123 09:21:39.304933    8324 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t04c1o.ha8o24i0fnkf1qgm \
	I1123 09:21:39.305093    8324 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cab0eee9d02964e993f7a80d1ad73b5107181b958d20515376baf29d6d503ecb \
	I1123 09:21:39.305125    8324 kubeadm.go:319] 	--control-plane 
	I1123 09:21:39.305139    8324 kubeadm.go:319] 
	I1123 09:21:39.305249    8324 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:21:39.305259    8324 kubeadm.go:319] 
	I1123 09:21:39.305375    8324 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t04c1o.ha8o24i0fnkf1qgm \
	I1123 09:21:39.305531    8324 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cab0eee9d02964e993f7a80d1ad73b5107181b958d20515376baf29d6d503ecb 
	I1123 09:21:39.307256    8324 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:21:39.307279    8324 cni.go:84] Creating CNI manager for ""
	I1123 09:21:39.307288    8324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:21:39.308981    8324 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1123 09:21:39.310112    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1123 09:21:39.323401    8324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
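Bridge CNI here is a single conflist written from memory to /etc/cni/net.d/1-k8s.conflist. A minimal bridge-plus-portmap config of the same general shape looks like the following (a sketch only; the exact 496-byte file minikube writes is not captured in this log, and the field values shown are assumptions apart from the 10.244.0.0/16 pod CIDR seen above):

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }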
	I1123 09:21:39.348985    8324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:21:39.349062    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:39.349084    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-894046 minikube.k8s.io/updated_at=2025_11_23T09_21_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=addons-894046 minikube.k8s.io/primary=true
	I1123 09:21:39.512354    8324 ops.go:34] apiserver oom_adj: -16
	I1123 09:21:39.514930    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:40.015814    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:40.515496    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:41.015770    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:41.515769    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:42.015221    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:42.515246    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:43.015144    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:21:43.099179    8324 kubeadm.go:1114] duration metric: took 3.750179885s to wait for elevateKubeSystemPrivileges
	I1123 09:21:43.099216    8324 kubeadm.go:403] duration metric: took 17.184352623s to StartCluster
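The half-second cadence of the "kubectl get sa default" runs between 09:21:39.5 and 09:21:43.0 above is a poll-until-exists loop waiting for the controller manager to create the default service account. A sketch of that pattern, with the kubectl path and kubeconfig taken from the log and the timeout assumed:

	// Poll every 500ms until the "default" ServiceAccount exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed bound
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}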
	I1123 09:21:43.099248    8324 settings.go:142] acquiring lock: {Name:mkda898dc919f319fca5c9c62e0026647031093a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:43.099396    8324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:21:43.099798    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/kubeconfig: {Name:mk064b50b49499ad2e4fbd86fe10fb95b12274a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:21:43.100011    8324 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:21:43.100075    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:21:43.100131    8324 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 09:21:43.100272    8324 addons.go:70] Setting yakd=true in profile "addons-894046"
	I1123 09:21:43.100292    8324 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-894046"
	I1123 09:21:43.100307    8324 addons.go:70] Setting ingress=true in profile "addons-894046"
	I1123 09:21:43.100310    8324 addons.go:70] Setting inspektor-gadget=true in profile "addons-894046"
	I1123 09:21:43.100321    8324 addons.go:239] Setting addon ingress=true in "addons-894046"
	I1123 09:21:43.100332    8324 addons.go:70] Setting storage-provisioner=true in profile "addons-894046"
	I1123 09:21:43.100319    8324 addons.go:70] Setting gcp-auth=true in profile "addons-894046"
	I1123 09:21:43.100337    8324 addons.go:70] Setting default-storageclass=true in profile "addons-894046"
	I1123 09:21:43.100346    8324 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-894046"
	I1123 09:21:43.100351    8324 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-894046"
	I1123 09:21:43.100362    8324 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-894046"
	I1123 09:21:43.100364    8324 addons.go:70] Setting volcano=true in profile "addons-894046"
	I1123 09:21:43.100366    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100323    8324 addons.go:239] Setting addon inspektor-gadget=true in "addons-894046"
	I1123 09:21:43.100373    8324 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-894046"
	I1123 09:21:43.100376    8324 addons.go:239] Setting addon volcano=true in "addons-894046"
	I1123 09:21:43.100375    8324 addons.go:70] Setting volumesnapshots=true in profile "addons-894046"
	I1123 09:21:43.100386    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100390    8324 addons.go:239] Setting addon volumesnapshots=true in "addons-894046"
	I1123 09:21:43.100410    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100416    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100639    8324 addons.go:70] Setting cloud-spanner=true in profile "addons-894046"
	I1123 09:21:43.100682    8324 addons.go:239] Setting addon cloud-spanner=true in "addons-894046"
	I1123 09:21:43.100715    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100362    8324 addons.go:70] Setting registry=true in profile "addons-894046"
	I1123 09:21:43.101078    8324 addons.go:239] Setting addon registry=true in "addons-894046"
	I1123 09:21:43.101108    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100343    8324 addons.go:239] Setting addon storage-provisioner=true in "addons-894046"
	I1123 09:21:43.101276    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100323    8324 addons.go:70] Setting registry-creds=true in profile "addons-894046"
	I1123 09:21:43.101392    8324 addons.go:239] Setting addon registry-creds=true in "addons-894046"
	I1123 09:21:43.101412    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100285    8324 addons.go:70] Setting ingress-dns=true in profile "addons-894046"
	I1123 09:21:43.100321    8324 config.go:182] Loaded profile config "addons-894046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:21:43.101460    8324 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-894046"
	I1123 09:21:43.101477    8324 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-894046"
	I1123 09:21:43.101444    8324 addons.go:239] Setting addon ingress-dns=true in "addons-894046"
	I1123 09:21:43.101498    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.101509    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100366    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100352    8324 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-894046"
	I1123 09:21:43.101718    8324 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-894046"
	I1123 09:21:43.101754    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100355    8324 mustload.go:66] Loading cluster: addons-894046
	I1123 09:21:43.100343    8324 addons.go:70] Setting metrics-server=true in profile "addons-894046"
	I1123 09:21:43.102236    8324 addons.go:239] Setting addon metrics-server=true in "addons-894046"
	I1123 09:21:43.102276    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.100297    8324 addons.go:239] Setting addon yakd=true in "addons-894046"
	I1123 09:21:43.102348    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.102278    8324 out.go:179] * Verifying Kubernetes components...
	I1123 09:21:43.102189    8324 config.go:182] Loaded profile config "addons-894046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:21:43.103968    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1123 09:21:43.108087    8324 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 09:21:43.108330    8324 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 09:21:43.108339    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 09:21:43.108749    8324 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-894046"
	I1123 09:21:43.108764    8324 addons.go:239] Setting addon default-storageclass=true in "addons-894046"
	I1123 09:21:43.108780    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.108794    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.109483    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 09:21:43.109519    8324 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 09:21:43.109564    8324 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 09:21:43.109587    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 09:21:43.110342    8324 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 09:21:43.110350    8324 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 09:21:43.110342    8324 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 09:21:43.110342    8324 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:21:43.110373    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 09:21:43.110372    8324 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 09:21:43.111158    8324 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 09:21:43.112037    8324 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:21:43.112335    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 09:21:43.111456    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:43.111159    8324 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 09:21:43.112667    8324 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:21:43.113067    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:21:43.113419    8324 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 09:21:43.113427    8324 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:21:43.112921    8324 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:21:43.114230    8324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:21:43.113439    8324 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:21:43.114317    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 09:21:43.112719    8324 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:21:43.114399    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 09:21:43.113440    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 09:21:43.113449    8324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 09:21:43.114558    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 09:21:43.114559    8324 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 09:21:43.114603    8324 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 09:21:43.114570    8324 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:21:43.114932    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 09:21:43.113545    8324 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 09:21:43.115327    8324 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 09:21:43.115343    8324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 09:21:43.116016    8324 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 09:21:43.116063    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 09:21:43.116070    8324 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 09:21:43.116302    8324 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 09:21:43.117264    8324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:21:43.117303    8324 out.go:179]   - Using image docker.io/busybox:stable
	I1123 09:21:43.118043    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 09:21:43.118053    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.118872    8324 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:21:43.119205    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 09:21:43.118992    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.120163    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.120194    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.120513    8324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:21:43.120902    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.120934    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.121048    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.121291    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 09:21:43.121916    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.122377    8324 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:21:43.122394    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 09:21:43.123117    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.124227    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 09:21:43.124548    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.124567    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.124581    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.125214    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.125529    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.126230    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.126261    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.126355    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.126601    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.126682    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.127058    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.127179    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.127210    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.127660    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.127677    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.127665    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.127713    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.128072    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.128099    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.128145    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.128174    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.128346    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.128440    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.128665    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.128712    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.128904    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.128984    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.129017    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.129043    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.129476    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.129489    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.129506    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.129728    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.129755    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.129812    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.130051    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.130087    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.130084    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.130218    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.130405    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.130418    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 09:21:43.130815    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.130847    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.131008    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.131425    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.131829    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.131851    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.132044    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:43.132881    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 09:21:43.134297    8324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 09:21:43.135368    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 09:21:43.135388    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 09:21:43.137837    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.138212    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:43.138244    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:43.138364    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	W1123 09:21:43.257832    8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44684->192.168.39.58:22: read: connection reset by peer
	I1123 09:21:43.257866    8324 retry.go:31] will retry after 345.330181ms: ssh: handshake failed: read tcp 192.168.39.1:44684->192.168.39.58:22: read: connection reset by peer
	W1123 09:21:43.259665    8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44702->192.168.39.58:22: read: connection reset by peer
	I1123 09:21:43.259683    8324 retry.go:31] will retry after 311.567069ms: ssh: handshake failed: read tcp 192.168.39.1:44702->192.168.39.58:22: read: connection reset by peer
	W1123 09:21:43.259750    8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44718->192.168.39.58:22: read: connection reset by peer
	I1123 09:21:43.259759    8324 retry.go:31] will retry after 307.316733ms: ssh: handshake failed: read tcp 192.168.39.1:44718->192.168.39.58:22: read: connection reset by peer
	W1123 09:21:43.274376    8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44720->192.168.39.58:22: read: connection reset by peer
	I1123 09:21:43.274395    8324 retry.go:31] will retry after 215.476522ms: ssh: handshake failed: read tcp 192.168.39.1:44720->192.168.39.58:22: read: connection reset by peer
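Each failed SSH handshake above is retried after a short randomized delay rather than a fixed one, which keeps the four concurrent dials from hammering sshd in lockstep. A generic retry-with-jitter sketch in the same spirit (attempt count and delay bounds are assumptions, not minikube's actual values):

	// Retry an operation with jittered backoff, logging each delay.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(4, 200*time.Millisecond, func() error {
			return errors.New("ssh: handshake failed") // stand-in for the real dial
		})
	}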
	I1123 09:21:43.580866    8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:21:43.580888    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
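The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts{} stanza resolving host.minikube.internal to the host-side gateway (192.168.39.1) ahead of the forward plugin, so cluster workloads can reach the host by name, then pushes the result back with kubectl replace. A string-level Go sketch of the same edit (the Corefile fragment is abbreviated):

	// Insert a hosts{} stanza before the forward plugin in a Corefile.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
		fmt.Print(strings.Replace(corefile, "        forward", hosts+"        forward", 1))
	}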
	I1123 09:21:43.641789    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:21:43.678887    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 09:21:43.679173    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 09:21:43.774266    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 09:21:43.782087    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 09:21:43.783508    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:21:43.810651    8324 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 09:21:43.810681    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 09:21:43.823029    8324 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 09:21:43.823055    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 09:21:43.885299    8324 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 09:21:43.885328    8324 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 09:21:44.036562    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 09:21:44.118574    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 09:21:44.333816    8324 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 09:21:44.333844    8324 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 09:21:44.338280    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 09:21:44.338298    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 09:21:44.426829    8324 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 09:21:44.426859    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 09:21:44.458372    8324 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 09:21:44.458404    8324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 09:21:44.502915    8324 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 09:21:44.502963    8324 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 09:21:44.578726    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 09:21:44.711876    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 09:21:45.014597    8324 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 09:21:45.014623    8324 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 09:21:45.041345    8324 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:21:45.041368    8324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 09:21:45.079871    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 09:21:45.079897    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 09:21:45.229353    8324 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:21:45.229379    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 09:21:45.256130    8324 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 09:21:45.256156    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 09:21:45.795170    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 09:21:45.795202    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 09:21:45.814501    8324 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:21:45.814519    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 09:21:45.865312    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 09:21:45.892137    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 09:21:46.002232    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 09:21:46.002267    8324 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 09:21:46.200082    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 09:21:46.275789    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 09:21:46.275828    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 09:21:46.470236    8324 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:21:46.470264    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 09:21:46.731775    8324 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 09:21:46.731806    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 09:21:46.999905    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:21:47.095544    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 09:21:47.095572    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 09:21:47.617336    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 09:21:47.617360    8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 09:21:47.680107    8324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.099187704s)
	I1123 09:21:47.680140    8324 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1123 09:21:47.680151    8324 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.099245599s)
	I1123 09:21:47.680242    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.038414006s)
	I1123 09:21:47.680285    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.00137092s)
	I1123 09:21:47.681127    8324 node_ready.go:35] waiting up to 6m0s for node "addons-894046" to be "Ready" ...
	I1123 09:21:47.686785    8324 node_ready.go:49] node "addons-894046" is "Ready"
	I1123 09:21:47.686801    8324 node_ready.go:38] duration metric: took 5.649469ms for node "addons-894046" to be "Ready" ...
	I1123 09:21:47.686811    8324 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:21:47.686852    8324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:21:47.943807    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 09:21:47.943827    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 09:21:48.185151    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 09:21:48.185176    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 09:21:48.206922    8324 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-894046" context rescaled to 1 replicas
	I1123 09:21:48.565890    8324 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:21:48.565957    8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 09:21:49.028151    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 09:21:50.280024    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.505721147s)
	I1123 09:21:50.280156    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.498026621s)
	I1123 09:21:50.280237    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.496701195s)
	I1123 09:21:50.294510    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.615301001s)
	I1123 09:21:50.588088    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 09:21:50.590849    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:50.591271    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:50.591335    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:50.591552    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:51.009880    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 09:21:51.047540    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.010943973s)
	I1123 09:21:51.047630    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.929025702s)
	I1123 09:21:51.047694    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.468945038s)
	I1123 09:21:51.179162    8324 addons.go:239] Setting addon gcp-auth=true in "addons-894046"
	I1123 09:21:51.179225    8324 host.go:66] Checking if "addons-894046" exists ...
	I1123 09:21:51.181224    8324 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 09:21:51.183608    8324 main.go:143] libmachine: domain addons-894046 has defined MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:51.184013    8324 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:3c:de", ip: ""} in network mk-addons-894046: {Iface:virbr1 ExpiryTime:2025-11-23 10:21:15 +0000 UTC Type:0 Mac:52:54:00:10:3c:de Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-894046 Clientid:01:52:54:00:10:3c:de}
	I1123 09:21:51.184036    8324 main.go:143] libmachine: domain addons-894046 has defined IP address 192.168.39.58 and MAC address 52:54:00:10:3c:de in network mk-addons-894046
	I1123 09:21:51.184200    8324 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/addons-894046/id_rsa Username:docker}
	I1123 09:21:52.567928    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.856004026s)
	I1123 09:21:52.567997    8324 addons.go:495] Verifying addon ingress=true in "addons-894046"
	I1123 09:21:52.568006    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.702657066s)
	I1123 09:21:52.568032    8324 addons.go:495] Verifying addon registry=true in "addons-894046"
	I1123 09:21:52.568170    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.368052181s)
	I1123 09:21:52.568113    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.675942731s)
	I1123 09:21:52.568247    8324 addons.go:495] Verifying addon metrics-server=true in "addons-894046"
	I1123 09:21:52.568324    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.568377465s)
	W1123 09:21:52.569340    8324 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 09:21:52.569367    8324 retry.go:31] will retry after 190.912316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
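The failure above is the classic CRD-before-CR race: the VolumeSnapshotClass object is applied in the same kubectl batch that creates its CustomResourceDefinition, and REST mapping for the new kind is not yet available, hence "ensure CRDs are installed first". minikube simply retries (and at 09:21:52.760866 below reapplies with --force). An alternative pattern is to block on the CRD's Established condition before applying dependents; a sketch using kubectl wait (a generic approach, not minikube's actual code path):

	// Wait for the CRD to be served before applying resources of that kind.
	package main

	import "os/exec"

	func main() {
		wait := exec.Command("kubectl", "wait", "--for=condition=Established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s")
		if err := wait.Run(); err != nil {
			panic(err)
		}
		apply := exec.Command("kubectl", "apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
		if err := apply.Run(); err != nil {
			panic(err)
		}
	}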
	I1123 09:21:52.568337    8324 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.881473089s)
	I1123 09:21:52.569420    8324 api_server.go:72] duration metric: took 9.469379692s to wait for apiserver process to appear ...
	I1123 09:21:52.569436    8324 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:21:52.569459    8324 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1123 09:21:52.569674    8324 out.go:179] * Verifying ingress addon...
	I1123 09:21:52.569673    8324 out.go:179] * Verifying registry addon...
	I1123 09:21:52.570563    8324 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-894046 service yakd-dashboard -n yakd-dashboard
	
	I1123 09:21:52.572144    8324 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 09:21:52.572152    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 09:21:52.593648    8324 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1123 09:21:52.606852    8324 api_server.go:141] control plane version: v1.34.1
	I1123 09:21:52.606878    8324 api_server.go:131] duration metric: took 37.435353ms to wait for apiserver health ...
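The healthz probe logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy when it returns 200 with body "ok". A minimal sketch (TLS verification is skipped here for brevity; the real client trusts the cluster CA):

	// Probe the apiserver health endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		resp, err := client.Get("https://192.168.39.58:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect 200: ok
	}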
	I1123 09:21:52.606886    8324 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:21:52.641956    8324 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 09:21:52.641975    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:52.693736    8324 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 09:21:52.693757    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:52.693816    8324 system_pods.go:59] 15 kube-system pods found
	I1123 09:21:52.693864    8324 system_pods.go:61] "amd-gpu-device-plugin-27929" [b7f3807b-2e6b-442b-bab4-ddcdce9f67f7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:21:52.693879    8324 system_pods.go:61] "coredns-66bc5c9577-g79cp" [a880affd-d06f-4d71-843a-fc8b179267e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:21:52.693891    8324 system_pods.go:61] "coredns-66bc5c9577-wmj8q" [5347faf5-7f08-4d12-99fc-3ef5c6f51934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:21:52.693895    8324 system_pods.go:61] "etcd-addons-894046" [939f69c7-c1b5-4566-a518-91529d130e65] Running
	I1123 09:21:52.693899    8324 system_pods.go:61] "kube-apiserver-addons-894046" [24146a26-8ab3-4d90-a0e4-2d388bf5b361] Running
	I1123 09:21:52.693903    8324 system_pods.go:61] "kube-controller-manager-addons-894046" [51714a4b-1231-4355-84ee-f31dff690b02] Running
	I1123 09:21:52.693910    8324 system_pods.go:61] "kube-ingress-dns-minikube" [308f85f4-d52f-40f0-9e7c-71a17b14ba0d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:21:52.693915    8324 system_pods.go:61] "kube-proxy-ssht6" [ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74] Running
	I1123 09:21:52.693919    8324 system_pods.go:61] "kube-scheduler-addons-894046" [c3e6505b-dc42-4c01-a093-4d0ed17b4ce9] Running
	I1123 09:21:52.693925    8324 system_pods.go:61] "metrics-server-85b7d694d7-ngrml" [593c3ff1-a033-4c7c-add4-d09e6fb259d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:21:52.693960    8324 system_pods.go:61] "nvidia-device-plugin-daemonset-7qrmh" [493cdf00-f9b3-4b3e-8a0a-e8c7f74af685] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:21:52.693974    8324 system_pods.go:61] "registry-6b586f9694-sr8qg" [c79ec895-1d09-4632-859b-705ab6ff1179] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:21:52.693984    8324 system_pods.go:61] "registry-creds-764b6fb674-7t5sc" [693a05a3-8715-4418-b09c-4aa0c2a784fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:21:52.693995    8324 system_pods.go:61] "registry-proxy-qjx29" [f4a60b63-e27f-49b2-a26d-f03d7bff66cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:21:52.694003    8324 system_pods.go:61] "storage-provisioner" [7a977432-ac27-4f21-9bde-1f18c242b1b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:21:52.694015    8324 system_pods.go:74] duration metric: took 87.122287ms to wait for pod list to return data ...
	I1123 09:21:52.694027    8324 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:21:52.714581    8324 default_sa.go:45] found service account: "default"
	I1123 09:21:52.714612    8324 default_sa.go:55] duration metric: took 20.579863ms for default service account to be created ...
	I1123 09:21:52.714621    8324 system_pods.go:116] waiting for k8s-apps to be running ...
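The default_sa.go lines above record a poll against the API server until the "default" ServiceAccount exists. A minimal sketch of that kind of wait, assuming client-go and an already-built clientset; the package and helper names (verify, waitForDefaultSA) are illustrative, not minikube's actual code:

package verify

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount appears in the
// given namespace, mirroring the default_sa.go wait logged above.
// (Hypothetical helper; intervals chosen arbitrarily for the sketch.)
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface, ns string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			return true, nil
		})
}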
	I1123 09:21:52.760866    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 09:21:52.813140    8324 system_pods.go:86] 17 kube-system pods found
	I1123 09:21:52.813188    8324 system_pods.go:89] "amd-gpu-device-plugin-27929" [b7f3807b-2e6b-442b-bab4-ddcdce9f67f7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 09:21:52.813202    8324 system_pods.go:89] "coredns-66bc5c9577-g79cp" [a880affd-d06f-4d71-843a-fc8b179267e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:21:52.813216    8324 system_pods.go:89] "coredns-66bc5c9577-wmj8q" [5347faf5-7f08-4d12-99fc-3ef5c6f51934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:21:52.813224    8324 system_pods.go:89] "etcd-addons-894046" [939f69c7-c1b5-4566-a518-91529d130e65] Running
	I1123 09:21:52.813233    8324 system_pods.go:89] "kube-apiserver-addons-894046" [24146a26-8ab3-4d90-a0e4-2d388bf5b361] Running
	I1123 09:21:52.813240    8324 system_pods.go:89] "kube-controller-manager-addons-894046" [51714a4b-1231-4355-84ee-f31dff690b02] Running
	I1123 09:21:52.813251    8324 system_pods.go:89] "kube-ingress-dns-minikube" [308f85f4-d52f-40f0-9e7c-71a17b14ba0d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 09:21:52.813258    8324 system_pods.go:89] "kube-proxy-ssht6" [ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74] Running
	I1123 09:21:52.813265    8324 system_pods.go:89] "kube-scheduler-addons-894046" [c3e6505b-dc42-4c01-a093-4d0ed17b4ce9] Running
	I1123 09:21:52.813275    8324 system_pods.go:89] "metrics-server-85b7d694d7-ngrml" [593c3ff1-a033-4c7c-add4-d09e6fb259d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 09:21:52.813285    8324 system_pods.go:89] "nvidia-device-plugin-daemonset-7qrmh" [493cdf00-f9b3-4b3e-8a0a-e8c7f74af685] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 09:21:52.813297    8324 system_pods.go:89] "registry-6b586f9694-sr8qg" [c79ec895-1d09-4632-859b-705ab6ff1179] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 09:21:52.813311    8324 system_pods.go:89] "registry-creds-764b6fb674-7t5sc" [693a05a3-8715-4418-b09c-4aa0c2a784fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 09:21:52.813321    8324 system_pods.go:89] "registry-proxy-qjx29" [f4a60b63-e27f-49b2-a26d-f03d7bff66cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 09:21:52.813331    8324 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4lw9k" [b2d2cdae-0e4f-4f4c-b96a-b70ffd7c885c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 09:21:52.813339    8324 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qbbs" [98d59d50-2339-4d8d-be3e-fa5150ef7d96] Pending
	I1123 09:21:52.813351    8324 system_pods.go:89] "storage-provisioner" [7a977432-ac27-4f21-9bde-1f18c242b1b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:21:52.813361    8324 system_pods.go:126] duration metric: took 98.733829ms to wait for k8s-apps to be running ...
	I1123 09:21:52.813376    8324 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:21:52.813439    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:21:53.088588    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:53.088692    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:53.479453    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.451248421s)
	I1123 09:21:53.479490    8324 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-894046"
	I1123 09:21:53.479563    8324 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.298308376s)
	I1123 09:21:53.481284    8324 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 09:21:53.481301    8324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 09:21:53.482513    8324 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 09:21:53.483169    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 09:21:53.483737    8324 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 09:21:53.483758    8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
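Each ssh_runner.go Run/Completed pair above corresponds to a file copied into the guest VM followed by a command executed there. A rough sketch of that shape using os/exec against plain scp/ssh binaries; hostAddr and keyPath are placeholders, not values from this run, and this is not minikube's actual transport:

package verify

import (
	"fmt"
	"os/exec"
)

// applyAddon pushes a manifest into the guest and applies it with the
// bundled kubectl, roughly what the ssh_runner.go lines above record.
func applyAddon(hostAddr, keyPath, localYAML, remoteYAML string) error {
	scp := exec.Command("scp", "-i", keyPath, localYAML, hostAddr+":"+remoteYAML)
	if out, err := scp.CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	apply := exec.Command("ssh", "-i", keyPath, hostAddr,
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.34.1/kubectl apply -f "+remoteYAML)
	if out, err := apply.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}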
	I1123 09:21:53.490002    8324 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 09:21:53.490017    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:53.581654    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:53.581664    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:53.632005    8324 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 09:21:53.632047    8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 09:21:53.741640    8324 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:21:53.741662    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 09:21:53.846780    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 09:21:54.013369    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:54.081353    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:54.082393    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:54.488291    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:54.589347    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:54.589612    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:54.708087    8324 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.894622048s)
	I1123 09:21:54.708118    8324 system_svc.go:56] duration metric: took 1.894741484s WaitForService to wait for kubelet
	I1123 09:21:54.708126    8324 kubeadm.go:587] duration metric: took 11.608092594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:21:54.708141    8324 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:21:54.708364    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.947451792s)
	I1123 09:21:54.713457    8324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 09:21:54.713476    8324 node_conditions.go:123] node cpu capacity is 2
	I1123 09:21:54.713490    8324 node_conditions.go:105] duration metric: took 5.344438ms to run NodePressure ...
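The node_conditions.go lines read the node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs on this run). A client-go sketch of the same lookup, assuming a clientset as above; the function name is illustrative:

package verify

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists nodes and prints the two capacity fields the
// NodePressure check above logs: ephemeral storage and CPU count.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}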
	I1123 09:21:54.713501    8324 start.go:242] waiting for startup goroutines ...
	I1123 09:21:55.007032    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:55.109642    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:55.111144    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:55.176690    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.32986851s)
	I1123 09:21:55.177768    8324 addons.go:495] Verifying addon gcp-auth=true in "addons-894046"
	I1123 09:21:55.179906    8324 out.go:179] * Verifying gcp-auth addon...
	I1123 09:21:55.181897    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 09:21:55.205064    8324 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 09:21:55.205080    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
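From here on, the repeated kapi.go:96 lines are a readiness loop: for each addon, minikube re-lists pods matching a label selector roughly every 500ms and logs the current phase until the pod leaves Pending. A minimal sketch of such a loop with client-go; waitForLabeledPod is a hypothetical name, and minikube's real kapi.WaitForPods handles more states and produces the exact log format seen above:

package verify

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPod re-lists pods matching selector until all are Running,
// the loop behind each repeated "waiting for pod" line above.
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // selector not matching yet; retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log
				}
			}
			return true, nil
		})
}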
	I1123 09:21:55.491609    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:55.593297    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:55.595044    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:55.692310    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:55.995914    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:56.096307    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:56.100553    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:56.195198    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:56.490250    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:56.593632    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:56.593772    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:56.694101    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:56.993473    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:57.075660    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:57.075661    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:57.185854    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:57.487464    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:57.577025    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:57.577553    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:57.685629    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:57.987768    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:58.076872    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:58.077083    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:58.186338    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:58.488066    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:58.576061    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:58.578318    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:58.685429    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:58.987696    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:59.075708    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:59.078114    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:59.185423    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:59.488860    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:21:59.576627    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:21:59.578829    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:21:59.689644    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:21:59.988381    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:00.089548    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:00.089556    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:00.189303    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:00.491722    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:00.578550    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:00.578813    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:00.690353    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:00.987360    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:01.077290    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:01.078761    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:01.188394    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:01.489793    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:01.577755    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:01.577758    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:01.686299    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:01.987485    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:02.077096    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:02.077438    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:02.186456    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:02.488520    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:02.577594    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:02.579605    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:02.686594    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:02.989265    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:03.077328    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:03.080716    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:03.186270    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:03.487141    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:03.584302    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:03.586390    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:03.691031    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:03.986930    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:04.076276    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:04.076773    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:04.185435    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:04.486760    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:04.575954    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:04.576654    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:04.686233    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:04.987130    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:05.076791    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:05.076949    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:05.186051    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:05.487106    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:05.576022    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:05.576920    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:05.686048    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:05.988487    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:06.075963    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:06.076587    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:06.186019    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:06.488094    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:06.576867    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:06.577823    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:06.687038    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:06.986715    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:07.076014    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:07.076134    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:07.185288    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:07.487164    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:07.577992    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:07.578353    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:07.686101    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:07.986633    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:08.076665    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:08.077710    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:08.186057    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:08.486814    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:08.575975    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:08.576246    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:08.686066    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:08.987978    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:09.076582    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:09.077256    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:09.185813    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:09.487752    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:09.576863    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:09.577033    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:09.688453    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:09.989062    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:10.076703    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:10.077995    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:10.187988    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:10.487626    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:10.576101    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:10.577723    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:10.686219    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:10.988592    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:11.087681    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:11.087768    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:11.190719    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:11.490773    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:11.578044    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:11.579291    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:11.685694    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:11.987769    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:12.079563    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:12.080705    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:12.191491    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:12.492138    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:12.578757    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:12.579711    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:12.727074    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:12.999700    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:13.083008    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:13.083381    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:13.185904    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:13.488761    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:13.589461    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:13.589786    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:13.686506    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:13.987472    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:14.076110    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:14.076774    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:14.186758    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:14.489083    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:14.577007    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:14.577087    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:14.685218    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:14.987204    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:15.076365    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:15.076455    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:15.187314    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:15.487368    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:15.576882    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:15.577210    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:15.686313    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:15.986850    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:16.077036    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:16.077177    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:16.185984    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:16.487078    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:16.576679    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:16.577081    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:16.685966    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:16.989507    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:17.078712    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:17.078982    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:17.187568    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:17.487131    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:17.580103    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:17.582463    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:17.686711    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:17.987926    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:18.075983    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:18.076126    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:18.185799    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:18.488894    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:18.578232    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:18.578697    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:18.686062    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:18.987226    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:19.076064    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:19.077580    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:19.185818    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:19.488118    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:19.577552    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:19.578221    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:19.685511    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:19.987566    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:20.076212    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:20.076758    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:20.186158    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:20.487562    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:20.575872    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:20.576820    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:20.686128    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:20.986959    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:21.077669    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:21.077705    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:21.187147    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:21.490278    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:21.577355    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:21.580050    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:21.687066    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:21.989139    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:22.080611    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:22.080789    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:22.188086    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:22.488286    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:22.578693    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:22.578925    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:22.687586    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:22.986980    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:23.079593    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:23.081327    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:23.187818    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:23.492627    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:23.579108    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:23.580428    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:23.689596    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:23.987601    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:24.078352    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:24.080865    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:24.186837    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:24.491689    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:24.577164    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:24.578125    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:24.687131    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:24.990334    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:25.076838    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:25.077083    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:25.186793    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:25.488894    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:25.576520    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:25.577328    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:25.685967    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:25.988732    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:26.078447    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:26.078638    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:26.187497    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:26.584253    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:26.584429    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:26.584801    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:26.686334    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:27.056509    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:27.143706    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:27.144284    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:27.186492    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:27.489279    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:27.581115    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:27.586931    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:27.687094    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:27.987538    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:28.077015    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:28.077107    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:28.185501    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:28.488614    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:28.577715    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:28.580892    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:28.768269    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:28.988013    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:29.087394    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:29.088257    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:29.186919    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:29.490816    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:29.582757    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:29.583373    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:29.688287    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:29.989088    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:30.076863    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:30.085028    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:30.193738    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:30.496772    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:30.584603    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:30.587375    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:30.688402    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:30.987298    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:31.077497    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:31.078050    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:31.185656    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:31.487398    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:31.577518    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:31.577571    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:31.686361    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:31.987495    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:32.078305    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:32.079609    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:32.186037    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:32.486476    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:32.575752    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:32.575894    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 09:22:32.686964    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:22:32.989111    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:22:33.089325    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:22:33.089615    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[log condensed: kapi.go:96 polls the pods matching "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry", and "app.kubernetes.io/name=ingress-nginx" about every 500ms from 09:22:33 to 09:22:48; every check reports "current state: Pending: [<nil>]"]
	I1123 09:22:48.076513    8324 kapi.go:107] duration metric: took 55.504357743s to wait for kubernetes.io/minikube-addons=registry ...
	[log condensed: with the registry wait finished, kapi.go:96 keeps polling "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "app.kubernetes.io/name=ingress-nginx" about every 500ms from 09:22:48 to 09:23:13; every check still reports "current state: Pending: [<nil>]"]
	I1123 09:23:13.487974    8324 kapi.go:107] duration metric: took 1m20.004802926s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
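	[editor's note: the repeated kapi.go:96 lines above are minikube's label-selector wait loop. The sketch below is an illustrative reconstruction, not minikube's actual kapi.go code: it polls pods by label selector with client-go until all are Running, printing lines of the same shape as this log. The namespace, 500ms interval, and 6-minute timeout are assumptions for the sketch.]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls pods matching selector in ns until all are Running,
// logging a "waiting for pod" line on every unsuccessful check, much like the
// kapi.go:96 entries in this report.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		ready := err == nil && len(pods.Items) > 0
		if ready {
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					break
				}
			}
		}
		if ready {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		select {
		case <-ctx.Done():
			return ctx.Err() // overall timeout or cancellation
		case <-time.After(500 * time.Millisecond): // assumed poll interval
		}
	}
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSelector(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}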
	I1123 09:23:13.576910    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:13.684777    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:14.076851    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:14.185315    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:14.576631    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:14.686410    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:15.077567    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:15.186147    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:15.576846    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:15.685815    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:16.080089    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:16.185025    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:16.576076    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:16.685529    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:17.078443    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:17.186832    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:17.710318    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:17.710587    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:18.077048    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:18.185791    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:18.577215    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:18.685530    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:19.076256    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:19.186514    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:19.577600    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:19.685671    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:20.076890    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:20.185778    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:20.576563    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:20.686221    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:21.078703    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:21.185953    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:21.577189    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:21.686378    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:22.077061    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:22.185305    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:22.576015    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:22.685268    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:23.076060    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:23.185486    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:23.576447    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:23.685162    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:24.076697    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.186021    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:24.577823    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:24.686742    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:25.076847    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.185983    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:25.576181    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:25.685221    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:26.076833    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.185851    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:26.575522    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:26.686884    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:27.075978    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.185497    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:27.576578    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:27.685913    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:28.077678    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.186635    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:28.577322    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:28.685288    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:29.076419    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.185675    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:29.578700    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:29.685967    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.076100    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.184857    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:30.575721    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:30.685834    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.076575    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.186849    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:31.578283    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:31.688332    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.077564    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.185693    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:32.577796    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:32.685920    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.075517    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.186261    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:33.575918    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:33.685119    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.077151    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.185487    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:34.577188    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:34.685591    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.076842    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.186043    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:35.575701    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:35.686245    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.076789    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.186071    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:36.575813    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:36.687674    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.076558    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.185909    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:37.576541    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:37.685961    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.077997    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.185537    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:38.576484    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:38.686957    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.076292    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.185802    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:39.578269    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:39.685769    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.077668    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.186852    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:40.576397    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:40.685514    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.078930    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.185218    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:41.577145    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:41.685033    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.076005    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.185060    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:42.576897    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:42.686539    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.077305    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.185845    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:43.577563    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:43.685824    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.078057    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.185121    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:44.577723    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:44.686788    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.076849    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.186630    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:45.576721    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:45.685571    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.077150    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.185985    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:46.575378    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:46.686012    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.075498    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.185880    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:47.579101    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:47.684905    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.076213    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.185676    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:48.576776    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:48.686069    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.076627    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.186572    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:49.578362    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:49.686392    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.077580    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.186637    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:50.576453    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:50.686235    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.077438    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.185367    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:51.575915    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:23:51.686591    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:23:52.077331    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 102 near-identical "waiting for pod" poll lines elided: the two selectors above were re-polled every ~0.5s from 09:23:52 to 09:24:17, state Pending throughout ...]
	I1123 09:24:17.685717    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 09:24:18.077388    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:18.186606    8324 kapi.go:107] duration metric: took 2m23.00470735s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 09:24:18.188338    8324 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-894046 cluster.
	I1123 09:24:18.189776    8324 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 09:24:18.190976    8324 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
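
The three advisory lines above describe the gcp-auth addon's opt-out mechanism. As a minimal sketch, a pod that opts out of credential mounting could be created like this (the label key comes from the log above; the "true" value follows minikube's documented convention, and the pod name and image are illustrative):

    # Hypothetical pod that opts out of gcp-auth credential mounting.
    # Label key taken from the log above; the "true" value is assumed from
    # minikube's gcp-auth documentation.
    kubectl --context addons-894046 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds            # illustrative name
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
    EOF
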
	I1123 09:24:18.578448    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 10 near-identical ingress-nginx poll lines (09:24:19 to 09:24:23) elided ...]
	I1123 09:24:24.076910    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:24:24.576279    8324 kapi.go:107] duration metric: took 2m32.00413279s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 09:24:24.577913    8324 out.go:179] * Enabled addons: nvidia-device-plugin, default-storageclass, amd-gpu-device-plugin, registry-creds, storage-provisioner, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1123 09:24:24.579232    8324 addons.go:530] duration metric: took 2m41.479107053s for enable addons: enabled=[nvidia-device-plugin default-storageclass amd-gpu-device-plugin registry-creds storage-provisioner ingress-dns inspektor-gadget cloud-spanner storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1123 09:24:24.579269    8324 start.go:247] waiting for cluster config update ...
	I1123 09:24:24.579288    8324 start.go:256] writing updated cluster config ...
	I1123 09:24:24.579559    8324 ssh_runner.go:195] Run: rm -f paused
	I1123 09:24:24.587025    8324 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:24:24.591200    8324 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wmj8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.596462    8324 pod_ready.go:94] pod "coredns-66bc5c9577-wmj8q" is "Ready"
	I1123 09:24:24.596482    8324 pod_ready.go:86] duration metric: took 5.253871ms for pod "coredns-66bc5c9577-wmj8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.598651    8324 pod_ready.go:83] waiting for pod "etcd-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.603874    8324 pod_ready.go:94] pod "etcd-addons-894046" is "Ready"
	I1123 09:24:24.603893    8324 pod_ready.go:86] duration metric: took 5.223685ms for pod "etcd-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.606282    8324 pod_ready.go:83] waiting for pod "kube-apiserver-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.612302    8324 pod_ready.go:94] pod "kube-apiserver-addons-894046" is "Ready"
	I1123 09:24:24.612322    8324 pod_ready.go:86] duration metric: took 6.022891ms for pod "kube-apiserver-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.614603    8324 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:24.991854    8324 pod_ready.go:94] pod "kube-controller-manager-addons-894046" is "Ready"
	I1123 09:24:24.991880    8324 pod_ready.go:86] duration metric: took 377.259572ms for pod "kube-controller-manager-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:25.191967    8324 pod_ready.go:83] waiting for pod "kube-proxy-ssht6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:25.591433    8324 pod_ready.go:94] pod "kube-proxy-ssht6" is "Ready"
	I1123 09:24:25.591466    8324 pod_ready.go:86] duration metric: took 399.468388ms for pod "kube-proxy-ssht6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:25.792446    8324 pod_ready.go:83] waiting for pod "kube-scheduler-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:26.191959    8324 pod_ready.go:94] pod "kube-scheduler-addons-894046" is "Ready"
	I1123 09:24:26.191997    8324 pod_ready.go:86] duration metric: took 399.521407ms for pod "kube-scheduler-addons-894046" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:24:26.192030    8324 pod_ready.go:40] duration metric: took 1.604978396s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
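
The extra-wait loop above polls one label selector at a time until each matching pod reports Ready. A rough manual equivalent with kubectl (a sketch, assuming the same context; note that kubectl wait only covers the "Ready" half of the harness's "Ready or be gone" condition):

    # Sketch: wait on the same label selectors the log polls above.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context addons-894046 -n kube-system wait pod \
        --for=condition=Ready -l "$sel" --timeout=4m0s
    done
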
	I1123 09:24:26.236616    8324 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:24:26.239349    8324 out.go:179] * Done! kubectl is now configured to use "addons-894046" cluster and "default" namespace by default
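
With the start sequence finished, the result can be spot-checked from the host. The commands below are ordinary minikube/kubectl invocations added for illustration, not part of the harness:

    # Optional post-start verification (illustrative).
    out/minikube-linux-amd64 -p addons-894046 addons list
    kubectl --context addons-894046 -n ingress-nginx get pods -o wide
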
	
	
	==> CRI-O <==
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.510787823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=292c7974-5d2b-40e8-ac6f-14ecb35e1651 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.511078939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a95f19248475429a7f88789c257e7615236822b0566032554c19f5c0140996df,PodSandboxId:633e34f3d8e019d1bdf20a64f92ed9aa630065a621238b5b8e41e47d009609f8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763889903930232166,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d4168f77a432aff7fef933ccff80c3f391117eabd4c7cdc280ac73709333af,PodSandboxId:172a78c30296c6f8b00202dc4d83191154a04747c026f47cf385534b7af032fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763889873538798385,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d0ad8ee-037a-402f-b939-85865174f054,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14501841296dc5e6129de00f5e0cadc21bf1a952d7c5866624e3940a533aee40,PodSandboxId:ad22fed4c7781e366ea52e907bba6a440b7b6e8cb1d28f08ba82234894b9dfbf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763889863868417829,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-m49v9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80f443fe-c8f3-4cae-b676-c03e5f72a6a2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9791a07f3908a611b176d120dd5d782ad6f0768bf7024c15911bd5a12faba197,PodSandboxId:8b299d76aabc53fe3d2c5fe83f1b7a5a7205cc8effb13516e906d3d136d7d7bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889785214913082,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-599rd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 063d91fb-90b0-4bb6-82cf-9981d0e269ad,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72f8450680de62826d10a6fdbd8401466ce82b7f3a64724078fae1756de40e2,PodSandboxId:3def69e8d79619fac415ad8f9ce428a57a242ec3a6b3f30403b91329f5ae0b1e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889781827853226,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5q272,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 62eef597-0e76-48ab-8ab4-5b22259115be,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115185cf671ad6aa6521d97616959e3b7c417f506bc10bbff772b4c646fd1a2f,PodSandboxId:fcf8a61f8755a122d2b5d3276b945a137e6f82f2858090ae0177265f33f273ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763889748942344970,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308f85f4-d52f-40f0-9e7c-71a17b14ba0d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906da5d8717d2135a6a813833a64e505efabc09f7880f5d4deff154f211ee969,PodSandboxId:9d34e6682f43f8b19a389931304aff39ccd76df5b2e694fca3805b8af25b1053,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763889715714816269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-27929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d8afdb63c5ddf6432fa3e741c6bc57d468690ba2d56543ab0e7c6af1ca8fec,PodSandboxId:1c28558e8ab9c48a95ead9aee054bab4e253778d419d7e77b209076b26f851c7,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763889715252632289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a977432-ac27-4f21-9bde-1f18c242b1b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5d0276ae46e41900f81fcf174df96c71a933992bec1b424c48b4f98e59b80,PodSandboxId:b14dd56e0686dbf214a063ccc36313a1d412934a82b89c5afde611e7ce0749ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763889705430241834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wmj8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5347faf5-7f08-4d12-99fc-3ef5c6f51934,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eab09969b1728c3789b79aa1b8fc648fe9d81cb4fdffa9842edf4637197f40c,PodSandboxId:202d1ae55a489309ecccef9fd5b6b11cc67889b89a256f7691302da0a121f01f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763889704562534056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssht6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d325e5e606dbc946a438185bdcfa7a4ed7fcb71434433140ce1cdc77e9b34c9,PodSandboxId:d30dae3d67831ed29855707b824bbf3ab09ab921ff6c01da3019aa9390326ac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763889692765163387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cedf3b31d61119c419d2876abdfd30d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4e08058d4adf2200864edee478c0695656f6c37b60725e275e93de469e23b,PodSandboxId:e9e49aacea7bb36f22f6f552ca728c612ff254f97dd3ee1106b61f7a1fbab678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763889692745228136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 626686a9273450c77ee88840310b01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9c1
12505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a7e2ee3effdc83a1777c9815054e6531fcdf24f2128f8b606d0f1310136fa3,PodSandboxId:065f59288fec24e9050afa5e89d196a3c8dfac700fa1dc9653accce984f134fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763889692755612504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8213f64cb5bfb8df8e3db5
5300d5719b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d72a69d47baa143acde7f90463438730018e772bbebb5e68979521ab8520aa3,PodSandboxId:0814278b17c7c3c3fe87c5cdfa2c3d8f3831deebf37cc6c035e73602771f34c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763889692695081393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e2f8003573af72a01402e9d3f3e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=292c7974-5d2b-40e8-ac6f-14ecb35e1651 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.547593925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c201eb39-74d0-40c8-88d5-5d2b7e15ad49 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.547671830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c201eb39-74d0-40c8-88d5-5d2b7e15ad49 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.549997340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=562165c5-439e-4cce-8204-6071e4af1aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.551225264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763890046551200459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=562165c5-439e-4cce-8204-6071e4af1aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.552432768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16a99ce5-4c23-417f-aa4f-5c0215ac41c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.552493065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16a99ce5-4c23-417f-aa4f-5c0215ac41c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.553140874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a95f19248475429a7f88789c257e7615236822b0566032554c19f5c0140996df,PodSandboxId:633e34f3d8e019d1bdf20a64f92ed9aa630065a621238b5b8e41e47d009609f8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763889903930232166,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d4168f77a432aff7fef933ccff80c3f391117eabd4c7cdc280ac73709333af,PodSandboxId:172a78c30296c6f8b00202dc4d83191154a04747c026f47cf385534b7af032fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763889873538798385,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d0ad8ee-037a-402f-b939-85865174f054,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14501841296dc5e6129de00f5e0cadc21bf1a952d7c5866624e3940a533aee40,PodSandboxId:ad22fed4c7781e366ea52e907bba6a440b7b6e8cb1d28f08ba82234894b9dfbf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763889863868417829,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-m49v9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80f443fe-c8f3-4cae-b676-c03e5f72a6a2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9791a07f3908a611b176d120dd5d782ad6f0768bf7024c15911bd5a12faba197,PodSandboxId:8b299d76aabc53fe3d2c5fe83f1b7a5a7205cc8effb13516e906d3d136d7d7bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889785214913082,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-599rd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 063d91fb-90b0-4bb6-82cf-9981d0e269ad,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72f8450680de62826d10a6fdbd8401466ce82b7f3a64724078fae1756de40e2,PodSandboxId:3def69e8d79619fac415ad8f9ce428a57a242ec3a6b3f30403b91329f5ae0b1e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889781827853226,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5q272,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 62eef597-0e76-48ab-8ab4-5b22259115be,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115185cf671ad6aa6521d97616959e3b7c417f506bc10bbff772b4c646fd1a2f,PodSandboxId:fcf8a61f8755a122d2b5d3276b945a137e6f82f2858090ae0177265f33f273ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763889748942344970,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308f85f4-d52f-40f0-9e7c-71a17b14ba0d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906da5d8717d2135a6a813833a64e505efabc09f7880f5d4deff154f211ee969,PodSandboxId:9d34e6682f43f8b19a389931304aff39ccd76df5b2e694fca3805b8af25b1053,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763889715714816269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-27929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d8afdb63c5ddf6432fa3e741c6bc57d468690ba2d56543ab0e7c6af1ca8fec,PodSandboxId:1c28558e8ab9c48a95ead9aee054bab4e253778d419d7e77b209076b26f851c7,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763889715252632289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a977432-ac27-4f21-9bde-1f18c242b1b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5d0276ae46e41900f81fcf174df96c71a933992bec1b424c48b4f98e59b80,PodSandboxId:b14dd56e0686dbf214a063ccc36313a1d412934a82b89c5afde611e7ce0749ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763889705430241834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wmj8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5347faf5-7f08-4d12-99fc-3ef5c6f51934,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eab09969b1728c3789b79aa1b8fc648fe9d81cb4fdffa9842edf4637197f40c,PodSandboxId:202d1ae55a489309ecccef9fd5b6b11cc67889b89a256f7691302da0a121f01f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763889704562534056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssht6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d325e5e606dbc946a438185bdcfa7a4ed7fcb71434433140ce1cdc77e9b34c9,PodSandboxId:d30dae3d67831ed29855707b824bbf3ab09ab921ff6c01da3019aa9390326ac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763889692765163387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cedf3b31d61119c419d2876abdfd30d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4e08058d4adf2200864edee478c0695656f6c37b60725e275e93de469e23b,PodSandboxId:e9e49aacea7bb36f22f6f552ca728c612ff254f97dd3ee1106b61f7a1fbab678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763889692745228136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 626686a9273450c77ee88840310b01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9c1
12505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a7e2ee3effdc83a1777c9815054e6531fcdf24f2128f8b606d0f1310136fa3,PodSandboxId:065f59288fec24e9050afa5e89d196a3c8dfac700fa1dc9653accce984f134fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763889692755612504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8213f64cb5bfb8df8e3db5
5300d5719b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d72a69d47baa143acde7f90463438730018e772bbebb5e68979521ab8520aa3,PodSandboxId:0814278b17c7c3c3fe87c5cdfa2c3d8f3831deebf37cc6c035e73602771f34c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763889692695081393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e2f8003573af72a01402e9d3f3e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16a99ce5-4c23-417f-aa4f-5c0215ac41c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.586811068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed87b68c-9a1c-4fbb-b786-0c4302ba5a60 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.586912545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed87b68c-9a1c-4fbb-b786-0c4302ba5a60 name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.587967473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=affee2d3-f5ae-4529-be10-3567a3d910ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.589914704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763890046589885434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=affee2d3-f5ae-4529-be10-3567a3d910ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.591495411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e494f31e-e122-4d39-ba1b-6974349b2bf1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.591548433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e494f31e-e122-4d39-ba1b-6974349b2bf1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.591861254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a95f19248475429a7f88789c257e7615236822b0566032554c19f5c0140996df,PodSandboxId:633e34f3d8e019d1bdf20a64f92ed9aa630065a621238b5b8e41e47d009609f8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763889903930232166,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d4168f77a432aff7fef933ccff80c3f391117eabd4c7cdc280ac73709333af,PodSandboxId:172a78c30296c6f8b00202dc4d83191154a04747c026f47cf385534b7af032fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763889873538798385,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d0ad8ee-037a-402f-b939-85865174f054,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14501841296dc5e6129de00f5e0cadc21bf1a952d7c5866624e3940a533aee40,PodSandboxId:ad22fed4c7781e366ea52e907bba6a440b7b6e8cb1d28f08ba82234894b9dfbf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763889863868417829,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-m49v9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80f443fe-c8f3-4cae-b676-c03e5f72a6a2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9791a07f3908a611b176d120dd5d782ad6f0768bf7024c15911bd5a12faba197,PodSandboxId:8b299d76aabc53fe3d2c5fe83f1b7a5a7205cc8effb13516e906d3d136d7d7bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889785214913082,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-599rd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 063d91fb-90b0-4bb6-82cf-9981d0e269ad,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72f8450680de62826d10a6fdbd8401466ce82b7f3a64724078fae1756de40e2,PodSandboxId:3def69e8d79619fac415ad8f9ce428a57a242ec3a6b3f30403b91329f5ae0b1e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889781827853226,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5q272,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 62eef597-0e76-48ab-8ab4-5b22259115be,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115185cf671ad6aa6521d97616959e3b7c417f506bc10bbff772b4c646fd1a2f,PodSandboxId:fcf8a61f8755a122d2b5d3276b945a137e6f82f2858090ae0177265f33f273ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763889748942344970,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308f85f4-d52f-40f0-9e7c-71a17b14ba0d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906da5d8717d2135a6a813833a64e505efabc09f7880f5d4deff154f211ee969,PodSandboxId:9d34e6682f43f8b19a389931304aff39ccd76df5b2e694fca3805b8af25b1053,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763889715714816269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-27929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d8afdb63c5ddf6432fa3e741c6bc57d468690ba2d56543ab0e7c6af1ca8fec,PodSandboxId:1c28558e8ab9c48a95ead9aee054bab4e253778d419d7e77b209076b26f851c7,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763889715252632289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a977432-ac27-4f21-9bde-1f18c242b1b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5d0276ae46e41900f81fcf174df96c71a933992bec1b424c48b4f98e59b80,PodSandboxId:b14dd56e0686dbf214a063ccc36313a1d412934a82b89c5afde611e7ce0749ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763889705430241834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wmj8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5347faf5-7f08-4d12-99fc-3ef5c6f51934,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eab09969b1728c3789b79aa1b8fc648fe9d81cb4fdffa9842edf4637197f40c,PodSandboxId:202d1ae55a489309ecccef9fd5b6b11cc67889b89a256f7691302da0a121f01f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763889704562534056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssht6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d325e5e606dbc946a438185bdcfa7a4ed7fcb71434433140ce1cdc77e9b34c9,PodSandboxId:d30dae3d67831ed29855707b824bbf3ab09ab921ff6c01da3019aa9390326ac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763889692765163387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cedf3b31d61119c419d2876abdfd30d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4e08058d4adf2200864edee478c0695656f6c37b60725e275e93de469e23b,PodSandboxId:e9e49aacea7bb36f22f6f552ca728c612ff254f97dd3ee1106b61f7a1fbab678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763889692745228136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 626686a9273450c77ee88840310b01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9c1
12505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a7e2ee3effdc83a1777c9815054e6531fcdf24f2128f8b606d0f1310136fa3,PodSandboxId:065f59288fec24e9050afa5e89d196a3c8dfac700fa1dc9653accce984f134fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763889692755612504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8213f64cb5bfb8df8e3db5
5300d5719b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d72a69d47baa143acde7f90463438730018e772bbebb5e68979521ab8520aa3,PodSandboxId:0814278b17c7c3c3fe87c5cdfa2c3d8f3831deebf37cc6c035e73602771f34c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763889692695081393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e2f8003573af72a01402e9d3f3e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e494f31e-e122-4d39-ba1b-6974349b2bf1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.623131455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61e82626-c6e7-4425-a6eb-c5f6bb0ed67d name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.623218537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61e82626-c6e7-4425-a6eb-c5f6bb0ed67d name=/runtime.v1.RuntimeService/Version
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.625005709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce73f61c-1d06-41a9-99eb-a396485bbf09 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.627548509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763890046627525716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce73f61c-1d06-41a9-99eb-a396485bbf09 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.628588736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed2f2104-37a1-4668-90fb-d4a2627a463e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.628700229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed2f2104-37a1-4668-90fb-d4a2627a463e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.629024066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a95f19248475429a7f88789c257e7615236822b0566032554c19f5c0140996df,PodSandboxId:633e34f3d8e019d1bdf20a64f92ed9aa630065a621238b5b8e41e47d009609f8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763889903930232166,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49d4168f77a432aff7fef933ccff80c3f391117eabd4c7cdc280ac73709333af,PodSandboxId:172a78c30296c6f8b00202dc4d83191154a04747c026f47cf385534b7af032fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763889873538798385,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d0ad8ee-037a-402f-b939-85865174f054,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14501841296dc5e6129de00f5e0cadc21bf1a952d7c5866624e3940a533aee40,PodSandboxId:ad22fed4c7781e366ea52e907bba6a440b7b6e8cb1d28f08ba82234894b9dfbf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763889863868417829,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-m49v9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80f443fe-c8f3-4cae-b676-c03e5f72a6a2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9791a07f3908a611b176d120dd5d782ad6f0768bf7024c15911bd5a12faba197,PodSandboxId:8b299d76aabc53fe3d2c5fe83f1b7a5a7205cc8effb13516e906d3d136d7d7bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889785214913082,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-599rd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 063d91fb-90b0-4bb6-82cf-9981d0e269ad,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72f8450680de62826d10a6fdbd8401466ce82b7f3a64724078fae1756de40e2,PodSandboxId:3def69e8d79619fac415ad8f9ce428a57a242ec3a6b3f30403b91329f5ae0b1e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763889781827853226,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5q272,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 62eef597-0e76-48ab-8ab4-5b22259115be,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115185cf671ad6aa6521d97616959e3b7c417f506bc10bbff772b4c646fd1a2f,PodSandboxId:fcf8a61f8755a122d2b5d3276b945a137e6f82f2858090ae0177265f33f273ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763889748942344970,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308f85f4-d52f-40f0-9e7c-71a17b14ba0d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906da5d8717d2135a6a813833a64e505efabc09f7880f5d4deff154f211ee969,PodSandboxId:9d34e6682f43f8b19a389931304aff39ccd76df5b2e694fca3805b8af25b1053,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763889715714816269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-27929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d8afdb63c5ddf6432fa3e741c6bc57d468690ba2d56543ab0e7c6af1ca8fec,PodSandboxId:1c28558e8ab9c48a95ead9aee054bab4e253778d419d7e77b209076b26f851c7,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763889715252632289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a977432-ac27-4f21-9bde-1f18c242b1b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5d0276ae46e41900f81fcf174df96c71a933992bec1b424c48b4f98e59b80,PodSandboxId:b14dd56e0686dbf214a063ccc36313a1d412934a82b89c5afde611e7ce0749ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763889705430241834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wmj8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5347faf5-7f08-4d12-99fc-3ef5c6f51934,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eab09969b1728c3789b79aa1b8fc648fe9d81cb4fdffa9842edf4637197f40c,PodSandboxId:202d1ae55a489309ecccef9fd5b6b11cc67889b89a256f7691302da0a121f01f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763889704562534056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssht6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d325e5e606dbc946a438185bdcfa7a4ed7fcb71434433140ce1cdc77e9b34c9,PodSandboxId:d30dae3d67831ed29855707b824bbf3ab09ab921ff6c01da3019aa9390326ac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763889692765163387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cedf3b31d61119c419d2876abdfd30d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4e08058d4adf2200864edee478c0695656f6c37b60725e275e93de469e23b,PodSandboxId:e9e49aacea7bb36f22f6f552ca728c612ff254f97dd3ee1106b61f7a1fbab678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763889692745228136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 626686a9273450c77ee88840310b01ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9c1
12505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a7e2ee3effdc83a1777c9815054e6531fcdf24f2128f8b606d0f1310136fa3,PodSandboxId:065f59288fec24e9050afa5e89d196a3c8dfac700fa1dc9653accce984f134fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763889692755612504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8213f64cb5bfb8df8e3db5
5300d5719b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d72a69d47baa143acde7f90463438730018e772bbebb5e68979521ab8520aa3,PodSandboxId:0814278b17c7c3c3fe87c5cdfa2c3d8f3831deebf37cc6c035e73602771f34c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763889692695081393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e2f8003573af72a01402e9d3f3e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed2f2104-37a1-4668-90fb-d4a2627a463e name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.652908262Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb7bc78d-f53b-44f2-9bb6-c3667ca6c776 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 23 09:27:26 addons-894046 crio[818]: time="2025-11-23 09:27:26.654053837Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:40c116d0badf51120654e74b0090aaf9ebf902ec991830ce4fc52d5946678da5,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-jd4jj,Uid:f0d4de81-fd09-4f15-a4b4-69e9e1743df1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763890045722493246,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-jd4jj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0d4de81-fd09-4f15-a4b4-69e9e1743df1,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:27:25.406518012Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:633e34f3d8e019d1bdf20a64f92ed9aa630065a621238b5b8e41e47d009609f8,Metadata:&PodSandboxMetadata{Name:nginx,Uid:03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1763889898335701023,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03bc90d8-f0b1-44e9-84f8-0d66efc32c7b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:24:56.976701099Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:172a78c30296c6f8b00202dc4d83191154a04747c026f47cf385534b7af032fc,Metadata:&PodSandboxMetadata{Name:busybox,Uid:4d0ad8ee-037a-402f-b939-85865174f054,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889867151158170,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d0ad8ee-037a-402f-b939-85865174f054,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:24:26.833858278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad22fed4c7781e366ea52
e907bba6a440b7b6e8cb1d28f08ba82234894b9dfbf,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-6c8bf45fb-m49v9,Uid:80f443fe-c8f3-4cae-b676-c03e5f72a6a2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889848665587810,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-m49v9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80f443fe-c8f3-4cae-b676-c03e5f72a6a2,pod-template-hash: 6c8bf45fb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:21:52.208070560Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fcf8a61f8755a122d2b5d3276b945a137e6f82f2858090ae0177265f33f273ee,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:308f85f4-d52f-40f0-9e7c-71a17b14ba0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_REA
DY,CreatedAt:1763889711121472566,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308f85f4-d52f-40f0-9e7c-71a17b14ba0d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"
hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-23T09:21:50.029870973Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c28558e8ab9c48a95ead9aee054bab4e253778d419d7e77b209076b26f851c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7a977432-ac27-4f21-9bde-1f18c242b1b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889710864504348,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a977432-ac27-4f21-9bde-1f18c242b1b8,},Annotations:map[string]string{kubectl.kubernet
es.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-23T09:21:50.028552315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d34e6682f43f8b19a389931304aff39ccd76df5b2e694fca3805b8af25b1053,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-27929,Uid:b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1763889706652647017,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-27929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f3807b-2e6b-442b-bab4-ddcdce9f67f7,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:21:46.296960843Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b14dd56e0686dbf214a063ccc36313a1d412934a82b89c5afde611e7ce0749ff,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-wmj8q,Uid:5347faf5-7f08-4d12-99fc-3ef5c6f51934,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889704357515495,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-wmj8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5347faf5-7f08-4d12-99fc-3ef5c6f51934,k8s-app: kube-dns,pod-t
emplate-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:21:44.028692315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:202d1ae55a489309ecccef9fd5b6b11cc67889b89a256f7691302da0a121f01f,Metadata:&PodSandboxMetadata{Name:kube-proxy-ssht6,Uid:ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889704169960165,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ssht6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade4a4e1-ba9b-42bd-9ccc-c47ff039ad74,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-23T09:21:43.826803331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d30dae3d67831ed29855707b824bbf3ab09ab921ff6c01da3019aa9390326ac2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-894046,Uid:8cedf3b31d61119c419d2876a
bdfd30d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889692539607545,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cedf3b31d61119c419d2876abdfd30d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8cedf3b31d61119c419d2876abdfd30d,kubernetes.io/config.seen: 2025-11-23T09:21:30.776598967Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9e49aacea7bb36f22f6f552ca728c612ff254f97dd3ee1106b61f7a1fbab678,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-894046,Uid:626686a9273450c77ee88840310b01ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889692537172438,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-894046,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 626686a9273450c77ee88840310b01ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 626686a9273450c77ee88840310b01ba,kubernetes.io/config.seen: 2025-11-23T09:21:30.776597852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0814278b17c7c3c3fe87c5cdfa2c3d8f3831deebf37cc6c035e73602771f34c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-894046,Uid:84e2f8003573af72a01402e9d3f3e9b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889692491522314,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e2f8003573af72a01402e9d3f3e9b2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.58:8443,kubernetes.io/config.hash: 84e2f8003573af72a01402e9d3f3e9b2,kubernetes.io/config.seen: 2025-11-23T09:21:30.77659
6385Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:065f59288fec24e9050afa5e89d196a3c8dfac700fa1dc9653accce984f134fd,Metadata:&PodSandboxMetadata{Name:etcd-addons-894046,Uid:8213f64cb5bfb8df8e3db55300d5719b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763889692480135035,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-894046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8213f64cb5bfb8df8e3db55300d5719b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.58:2379,kubernetes.io/config.hash: 8213f64cb5bfb8df8e3db55300d5719b,kubernetes.io/config.seen: 2025-11-23T09:21:30.776592806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cb7bc78d-f53b-44f2-9bb6-c3667ca6c776 name=/runtime.v1.RuntimeService/ListPodSandbox
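
The crio debug entries above are the kubelet's periodic CRI polls against cri-o (RuntimeService/ListContainers, RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListPodSandbox); two identical ListContainers responses a few milliseconds apart are just consecutive polls. For hands-on debugging, the same RPCs can be issued from inside the node with crictl; a minimal sketch, assuming cri-o's default socket path:

  # run inside the VM, e.g. via: out/minikube-linux-amd64 -p addons-894046 ssh
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods          # RuntimeService/ListPodSandbox
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo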
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	a95f192484754       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   633e34f3d8e01       nginx                                      default
	49d4168f77a43       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   172a78c30296c       busybox                                    default
	14501841296dc       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   ad22fed4c7781       ingress-nginx-controller-6c8bf45fb-m49v9   ingress-nginx
	9791a07f3908a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   4 minutes ago       Exited              patch                     0                   8b299d76aabc5       ingress-nginx-admission-patch-599rd        ingress-nginx
	b72f8450680de       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   4 minutes ago       Exited              create                    0                   3def69e8d7961       ingress-nginx-admission-create-5q272       ingress-nginx
	115185cf671ad       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   fcf8a61f8755a       kube-ingress-dns-minikube                  kube-system
	906da5d8717d2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   9d34e6682f43f       amd-gpu-device-plugin-27929                kube-system
	08d8afdb63c5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   1c28558e8ab9c       storage-provisioner                        kube-system
	98e5d0276ae46       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   b14dd56e0686d       coredns-66bc5c9577-wmj8q                   kube-system
	2eab09969b172       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   202d1ae55a489       kube-proxy-ssht6                           kube-system
	8d325e5e606db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   d30dae3d67831       kube-scheduler-addons-894046               kube-system
	24a7e2ee3effd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   065f59288fec2       etcd-addons-894046                         kube-system
	83a4e08058d4a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   e9e49aacea7bb       kube-controller-manager-addons-894046      kube-system
	8d72a69d47baa       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   0814278b17c7c       kube-apiserver-addons-894046               kube-system
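
Note that the two Exited rows (create and patch) are the one-shot ingress-nginx admission cert-gen jobs, so CONTAINER_EXITED with attempt 0 is their normal terminal state; all long-running containers, including nginx and the ingress controller, show Running. A quick way to confirm the jobs completed cleanly (standard kubectl; pod names taken from the table above):

  kubectl --context addons-894046 -n ingress-nginx get pods \
    ingress-nginx-admission-create-5q272 ingress-nginx-admission-patch-599rd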
	
	
	==> coredns [98e5d0276ae46e41900f81fcf174df96c71a933992bec1b424c48b4f98e59b80] <==
	[INFO] 10.244.0.8:34152 - 18230 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000641283s
	[INFO] 10.244.0.8:34152 - 42746 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000111062s
	[INFO] 10.244.0.8:34152 - 61418 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00009798s
	[INFO] 10.244.0.8:34152 - 426 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121038s
	[INFO] 10.244.0.8:34152 - 16013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000087948s
	[INFO] 10.244.0.8:34152 - 17847 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000106835s
	[INFO] 10.244.0.8:34152 - 53383 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000073575s
	[INFO] 10.244.0.8:51997 - 54131 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109991s
	[INFO] 10.244.0.8:51997 - 54449 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161972s
	[INFO] 10.244.0.8:43748 - 14221 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127525s
	[INFO] 10.244.0.8:43748 - 14474 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000139986s
	[INFO] 10.244.0.8:44889 - 64843 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064382s
	[INFO] 10.244.0.8:44889 - 65062 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098044s
	[INFO] 10.244.0.8:44126 - 39783 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000224984s
	[INFO] 10.244.0.8:44126 - 40003 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068336s
	[INFO] 10.244.0.23:37309 - 20463 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000714755s
	[INFO] 10.244.0.23:47725 - 6215 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000246054s
	[INFO] 10.244.0.23:46360 - 30328 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107227s
	[INFO] 10.244.0.23:48482 - 11721 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001947332s
	[INFO] 10.244.0.23:38114 - 31249 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143857s
	[INFO] 10.244.0.23:39018 - 52213 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000321258s
	[INFO] 10.244.0.23:33011 - 30354 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149628s
	[INFO] 10.244.0.23:54896 - 3917 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.004177354s
	[INFO] 10.244.0.27:60915 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000344645s
	[INFO] 10.244.0.27:43241 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018224s
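
The NXDOMAIN runs in the coredns log are normal resolver behavior, not failures: with the default ndots:5 resolv.conf, each name is first tried against the pod's search domains (hence registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local and friends) before the fully-qualified form answers NOERROR. This is reproducible from any pod; a sketch using the busybox pod listed earlier:

  kubectl --context addons-894046 exec busybox -- nslookup registry.kube-system.svc.cluster.local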
	
	
	==> describe nodes <==
	Name:               addons-894046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-894046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=addons-894046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-894046
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:21:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-894046
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:27:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:25:44 +0000   Sun, 23 Nov 2025 09:21:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:25:44 +0000   Sun, 23 Nov 2025 09:21:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:25:44 +0000   Sun, 23 Nov 2025 09:21:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:25:44 +0000   Sun, 23 Nov 2025 09:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-894046
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d90b3a700e18440ab5782bbb6d15827a
	  System UUID:                d90b3a70-0e18-440a-b578-2bbb6d15827a
	  Boot ID:                    1c93f363-8da4-418f-86a7-58e5b6503242
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     hello-world-app-5d498dc89-jd4jj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-m49v9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m34s
	  kube-system                 amd-gpu-device-plugin-27929                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 coredns-66bc5c9577-wmj8q                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-894046                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m50s
	  kube-system                 kube-apiserver-addons-894046                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-controller-manager-addons-894046       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-ssht6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-894046                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m40s  kube-proxy       
	  Normal  Starting                 5m48s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m48s  kubelet          Node addons-894046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s  kubelet          Node addons-894046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s  kubelet          Node addons-894046 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m47s  kubelet          Node addons-894046 status is now: NodeReady
	  Normal  RegisteredNode           5m44s  node-controller  Node addons-894046 event: Registered Node addons-894046 in Controller
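
This block is kubectl describe output captured by minikube logs; the node's conditions (all healthy here), allocatable resources, and events can be re-checked after the failure window with:

  kubectl --context addons-894046 describe node addons-894046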
	
	
	==> dmesg <==
	[  +9.881004] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.780692] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.718097] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.035695] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.589594] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.357557] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.557082] kauditd_printk_skb: 101 callbacks suppressed
	[Nov23 09:23] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.637207] kauditd_printk_skb: 189 callbacks suppressed
	[Nov23 09:24] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.711049] kauditd_printk_skb: 65 callbacks suppressed
	[  +7.425199] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.398548] kauditd_printk_skb: 53 callbacks suppressed
	[ +10.662271] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.987516] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.030839] kauditd_printk_skb: 38 callbacks suppressed
	[Nov23 09:25] kauditd_printk_skb: 147 callbacks suppressed
	[  +0.000098] kauditd_printk_skb: 68 callbacks suppressed
	[  +2.174959] kauditd_printk_skb: 131 callbacks suppressed
	[  +3.025238] kauditd_printk_skb: 78 callbacks suppressed
	[  +2.184352] kauditd_printk_skb: 144 callbacks suppressed
	[ +12.114374] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 10 callbacks suppressed
	[Nov23 09:26] kauditd_printk_skb: 61 callbacks suppressed
	[Nov23 09:27] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [24a7e2ee3effdc83a1777c9815054e6531fcdf24f2128f8b606d0f1310136fa3] <==
	{"level":"warn","ts":"2025-11-23T09:22:57.752841Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.922936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:22:57.752875Z","caller":"traceutil/trace.go:172","msg":"trace[1433449578] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1022; }","duration":"181.970185ms","start":"2025-11-23T09:22:57.570894Z","end":"2025-11-23T09:22:57.752865Z","steps":["trace[1433449578] 'agreement among raft nodes before linearized reading'  (duration: 181.851033ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:22:57.756179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.947115ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:22:57.756210Z","caller":"traceutil/trace.go:172","msg":"trace[1341980658] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"156.983864ms","start":"2025-11-23T09:22:57.599218Z","end":"2025-11-23T09:22:57.756202Z","steps":["trace[1341980658] 'agreement among raft nodes before linearized reading'  (duration: 156.856287ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:22:57.763843Z","caller":"traceutil/trace.go:172","msg":"trace[699661770] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"157.836427ms","start":"2025-11-23T09:22:57.605997Z","end":"2025-11-23T09:22:57.763833Z","steps":["trace[699661770] 'process raft request'  (duration: 153.505153ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:23:10.853089Z","caller":"traceutil/trace.go:172","msg":"trace[559956947] linearizableReadLoop","detail":"{readStateIndex:1167; appliedIndex:1167; }","duration":"173.493996ms","start":"2025-11-23T09:23:10.679565Z","end":"2025-11-23T09:23:10.853059Z","steps":["trace[559956947] 'read index received'  (duration: 173.488987ms)","trace[559956947] 'applied index is now lower than readState.Index'  (duration: 4.099µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:23:10.853244Z","caller":"traceutil/trace.go:172","msg":"trace[153274798] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"232.129087ms","start":"2025-11-23T09:23:10.621103Z","end":"2025-11-23T09:23:10.853232Z","steps":["trace[153274798] 'process raft request'  (duration: 232.011883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:23:10.853544Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.984246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:23:10.854361Z","caller":"traceutil/trace.go:172","msg":"trace[1786808332] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1129; }","duration":"174.810356ms","start":"2025-11-23T09:23:10.679538Z","end":"2025-11-23T09:23:10.854348Z","steps":["trace[1786808332] 'agreement among raft nodes before linearized reading'  (duration: 173.961107ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:23:17.701316Z","caller":"traceutil/trace.go:172","msg":"trace[439020023] linearizableReadLoop","detail":"{readStateIndex:1187; appliedIndex:1187; }","duration":"142.117088ms","start":"2025-11-23T09:23:17.559145Z","end":"2025-11-23T09:23:17.701262Z","steps":["trace[439020023] 'read index received'  (duration: 142.111332ms)","trace[439020023] 'applied index is now lower than readState.Index'  (duration: 4.689µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:23:17.701725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.564456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.58\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-23T09:23:17.701806Z","caller":"traceutil/trace.go:172","msg":"trace[1922101167] range","detail":"{range_begin:/registry/masterleases/192.168.39.58; range_end:; response_count:1; response_revision:1148; }","duration":"142.632293ms","start":"2025-11-23T09:23:17.559142Z","end":"2025-11-23T09:23:17.701774Z","steps":["trace[1922101167] 'agreement among raft nodes before linearized reading'  (duration: 142.425488ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:23:17.701835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.233653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:23:17.701861Z","caller":"traceutil/trace.go:172","msg":"trace[1104684843] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1148; }","duration":"132.264878ms","start":"2025-11-23T09:23:17.569589Z","end":"2025-11-23T09:23:17.701854Z","steps":["trace[1104684843] 'agreement among raft nodes before linearized reading'  (duration: 132.161367ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:23:17.701570Z","caller":"traceutil/trace.go:172","msg":"trace[1908040782] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"186.117197ms","start":"2025-11-23T09:23:17.515443Z","end":"2025-11-23T09:23:17.701561Z","steps":["trace[1908040782] 'process raft request'  (duration: 186.022852ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:23:17.702162Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.770134ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:23:17.702183Z","caller":"traceutil/trace.go:172","msg":"trace[1650348259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1148; }","duration":"104.790835ms","start":"2025-11-23T09:23:17.597385Z","end":"2025-11-23T09:23:17.702176Z","steps":["trace[1650348259] 'agreement among raft nodes before linearized reading'  (duration: 104.754909ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:24:20.256251Z","caller":"traceutil/trace.go:172","msg":"trace[1160291190] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"169.982655ms","start":"2025-11-23T09:24:20.086241Z","end":"2025-11-23T09:24:20.256224Z","steps":["trace[1160291190] 'process raft request'  (duration: 169.889419ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:24:29.896700Z","caller":"traceutil/trace.go:172","msg":"trace[618875587] transaction","detail":"{read_only:false; response_revision:1292; number_of_response:1; }","duration":"246.31358ms","start":"2025-11-23T09:24:29.650373Z","end":"2025-11-23T09:24:29.896686Z","steps":["trace[618875587] 'process raft request'  (duration: 246.149538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:24:56.202519Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.664386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-11-23T09:24:56.202656Z","caller":"traceutil/trace.go:172","msg":"trace[2066478529] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1449; }","duration":"111.837001ms","start":"2025-11-23T09:24:56.090792Z","end":"2025-11-23T09:24:56.202629Z","steps":["trace[2066478529] 'range keys from in-memory index tree'  (duration: 111.531645ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:25:08.768695Z","caller":"traceutil/trace.go:172","msg":"trace[2021147189] transaction","detail":"{read_only:false; response_revision:1556; number_of_response:1; }","duration":"150.137098ms","start":"2025-11-23T09:25:08.618530Z","end":"2025-11-23T09:25:08.768667Z","steps":["trace[2021147189] 'process raft request'  (duration: 149.987624ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:25:29.461354Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.842516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:636"}
	{"level":"info","ts":"2025-11-23T09:25:29.461770Z","caller":"traceutil/trace.go:172","msg":"trace[786921717] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1742; }","duration":"127.37012ms","start":"2025-11-23T09:25:29.334379Z","end":"2025-11-23T09:25:29.461749Z","steps":["trace[786921717] 'range keys from in-memory index tree'  (duration: 125.699572ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:26:00.213047Z","caller":"traceutil/trace.go:172","msg":"trace[585665072] transaction","detail":"{read_only:false; response_revision:1842; number_of_response:1; }","duration":"103.790479ms","start":"2025-11-23T09:26:00.109241Z","end":"2025-11-23T09:26:00.213031Z","steps":["trace[585665072] 'process raft request'  (duration: 103.698078ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:27:26 up 6 min,  0 users,  load average: 1.61, 1.17, 0.59
	Linux addons-894046 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8d72a69d47baa143acde7f90463438730018e772bbebb5e68979521ab8520aa3] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 09:22:33.518084       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.222:443: connect: connection refused" logger="UnhandledError"
	E1123 09:22:33.523438       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.113.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.113.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.113.222:443: connect: connection refused" logger="UnhandledError"
	I1123 09:22:33.589640       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 09:24:41.025635       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41540: use of closed network connection
	E1123 09:24:41.213624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41568: use of closed network connection
	I1123 09:24:50.453894       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.114.226"}
	I1123 09:24:56.781908       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 09:24:57.040660       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.140.35"}
	I1123 09:25:34.541694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1123 09:25:37.025733       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1123 09:25:39.684197       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1123 09:26:02.375635       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 09:26:02.375903       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 09:26:02.414087       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 09:26:02.414231       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 09:26:02.435385       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 09:26:02.435443       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1123 09:26:02.472623       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1123 09:26:02.472879       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1123 09:26:03.416412       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1123 09:26:03.472907       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1123 09:26:03.485370       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1123 09:27:25.524143       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.148.161"}
	
	
	==> kube-controller-manager [83a4e08058d4adf2200864edee478c0695656f6c37b60725e275e93de469e23b] <==
	E1123 09:26:10.926170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:11.789781       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:11.790926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1123 09:26:12.986145       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 09:26:12.986189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:26:13.053845       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 09:26:13.053876       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1123 09:26:17.912450       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:17.913468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:21.061471       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:21.062485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:22.493404       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:22.494436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:35.489091       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:35.490100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:39.340133       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:39.341391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:26:44.668579       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:26:44.669657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:27:07.659968       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:27:07.661130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:27:14.183466       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:27:14.184477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1123 09:27:19.937337       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1123 09:27:19.938450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [2eab09969b1728c3789b79aa1b8fc648fe9d81cb4fdffa9842edf4637197f40c] <==
	I1123 09:21:45.612120       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:21:45.716402       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:21:45.716451       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
	E1123 09:21:45.716533       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:21:46.089106       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1123 09:21:46.089930       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1123 09:21:46.093107       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:21:46.230636       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:21:46.242690       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:21:46.244573       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:21:46.293184       1 config.go:200] "Starting service config controller"
	I1123 09:21:46.293222       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:21:46.300853       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:21:46.300896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:21:46.300918       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:21:46.300923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:21:46.304384       1 config.go:309] "Starting node config controller"
	I1123 09:21:46.304414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:21:46.304420       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:21:46.396483       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:21:46.405389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:21:46.421715       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8d325e5e606dbc946a438185bdcfa7a4ed7fcb71434433140ce1cdc77e9b34c9] <==
	E1123 09:21:35.971122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:21:35.971247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:21:35.971453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:21:35.971978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:21:35.972431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:21:35.975389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:21:35.975616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:21:35.976329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:21:35.976406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:21:35.978107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:21:35.978422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:21:35.978525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:21:35.978597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:21:35.978652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:21:35.978698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:21:35.978835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:21:35.978924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:21:36.806494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:21:36.844594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:21:36.852591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:21:36.922256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:21:37.009380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:21:37.033845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:21:37.112929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 09:21:37.554349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:26:05 addons-894046 kubelet[1503]: I1123 09:26:05.671656    1503 scope.go:117] "RemoveContainer" containerID="7278fbd759fd172743582ecead33bf0a77b7da209420ecce56216356cdf965d1"
	Nov 23 09:26:05 addons-894046 kubelet[1503]: I1123 09:26:05.672312    1503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7278fbd759fd172743582ecead33bf0a77b7da209420ecce56216356cdf965d1"} err="failed to get container status \"7278fbd759fd172743582ecead33bf0a77b7da209420ecce56216356cdf965d1\": rpc error: code = NotFound desc = could not find container \"7278fbd759fd172743582ecead33bf0a77b7da209420ecce56216356cdf965d1\": container with ID starting with 7278fbd759fd172743582ecead33bf0a77b7da209420ecce56216356cdf965d1 not found: ID does not exist"
	Nov 23 09:26:06 addons-894046 kubelet[1503]: I1123 09:26:06.656182    1503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f04d23c-50c0-4635-a782-6aeaaa27de24" path="/var/lib/kubelet/pods/5f04d23c-50c0-4635-a782-6aeaaa27de24/volumes"
	Nov 23 09:26:06 addons-894046 kubelet[1503]: I1123 09:26:06.657847    1503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c444a4-1f16-45de-b23b-71e500e6d45c" path="/var/lib/kubelet/pods/e4c444a4-1f16-45de-b23b-71e500e6d45c/volumes"
	Nov 23 09:26:06 addons-894046 kubelet[1503]: I1123 09:26:06.658317    1503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f511b7-972a-4e0f-b19a-4256f452e185" path="/var/lib/kubelet/pods/e9f511b7-972a-4e0f-b19a-4256f452e185/volumes"
	Nov 23 09:26:08 addons-894046 kubelet[1503]: E1123 09:26:08.970337    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763889968969819466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:08 addons-894046 kubelet[1503]: E1123 09:26:08.970364    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763889968969819466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:13 addons-894046 kubelet[1503]: I1123 09:26:13.651641    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-wmj8q" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:26:18 addons-894046 kubelet[1503]: E1123 09:26:18.973459    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763889978973008545  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:18 addons-894046 kubelet[1503]: E1123 09:26:18.973489    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763889978973008545  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:28 addons-894046 kubelet[1503]: E1123 09:26:28.976161    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763889988975665331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:28 addons-894046 kubelet[1503]: E1123 09:26:28.976209    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763889988975665331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:38 addons-894046 kubelet[1503]: E1123 09:26:38.978918    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763889998978494782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:38 addons-894046 kubelet[1503]: E1123 09:26:38.979406    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763889998978494782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:42 addons-894046 kubelet[1503]: I1123 09:26:42.652089    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-27929" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:26:48 addons-894046 kubelet[1503]: E1123 09:26:48.983124    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763890008982587454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:48 addons-894046 kubelet[1503]: E1123 09:26:48.983146    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763890008982587454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:58 addons-894046 kubelet[1503]: E1123 09:26:58.986819    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763890018986027799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:26:58 addons-894046 kubelet[1503]: E1123 09:26:58.986865    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763890018986027799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:27:08 addons-894046 kubelet[1503]: E1123 09:27:08.989870    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763890028989391406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:27:08 addons-894046 kubelet[1503]: E1123 09:27:08.989905    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763890028989391406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:27:13 addons-894046 kubelet[1503]: I1123 09:27:13.650707    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:27:18 addons-894046 kubelet[1503]: E1123 09:27:18.992618    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763890038992153256  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:27:18 addons-894046 kubelet[1503]: E1123 09:27:18.992639    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763890038992153256  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 23 09:27:25 addons-894046 kubelet[1503]: I1123 09:27:25.528814    1503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8h7\" (UniqueName: \"kubernetes.io/projected/f0d4de81-fd09-4f15-a4b4-69e9e1743df1-kube-api-access-lv8h7\") pod \"hello-world-app-5d498dc89-jd4jj\" (UID: \"f0d4de81-fd09-4f15-a4b4-69e9e1743df1\") " pod="default/hello-world-app-5d498dc89-jd4jj"
	
	
	==> storage-provisioner [08d8afdb63c5ddf6432fa3e741c6bc57d468690ba2d56543ab0e7c6af1ca8fec] <==
	W1123 09:27:01.587547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:03.590785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:03.595731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:05.599262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:05.607947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:07.610587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:07.615174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:09.619324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:09.627941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:11.631854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:11.637520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:13.640720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:13.648775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:15.652500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:15.658938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:17.662669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:17.668253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:19.672672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:19.677589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:21.682176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:21.690059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:23.692652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:23.697386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:25.702200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:27:25.710111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
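Note: the repeated storage-provisioner warnings above likely stem from its Endpoints-based leader election; the discovery.k8s.io/v1 replacement the warning points to can be listed directly. A minimal illustrative check against this run's context (not part of the test itself):

	# list the EndpointSlice objects that supersede the watched v1 Endpoints
	kubectl --context addons-894046 get endpointslices -n kube-system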
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-894046 -n addons-894046
helpers_test.go:269: (dbg) Run:  kubectl --context addons-894046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-jd4jj ingress-nginx-admission-create-5q272 ingress-nginx-admission-patch-599rd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-894046 describe pod hello-world-app-5d498dc89-jd4jj ingress-nginx-admission-create-5q272 ingress-nginx-admission-patch-599rd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-894046 describe pod hello-world-app-5d498dc89-jd4jj ingress-nginx-admission-create-5q272 ingress-nginx-admission-patch-599rd: exit status 1 (68.463541ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-jd4jj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-894046/192.168.39.58
	Start Time:       Sun, 23 Nov 2025 09:27:25 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv8h7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lv8h7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-jd4jj to addons-894046
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5q272" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-599rd" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-894046 describe pod hello-world-app-5d498dc89-jd4jj ingress-nginx-admission-create-5q272 ingress-nginx-admission-patch-599rd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable ingress-dns --alsologtostderr -v=1: (1.551384522s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable ingress --alsologtostderr -v=1: (7.716186376s)
--- FAIL: TestAddons/parallel/Ingress (160.49s)
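A minimal manual triage sketch for this kind of ingress curl timeout, assuming the addons-894046 context from the run above is still reachable; the controller deployment name is inferred from the pod shown in the node summary and is an assumption:

	# controller and service state in the ingress-nginx namespace
	kubectl --context addons-894046 get pods,svc -n ingress-nginx
	# ingress objects and their backends across all namespaces
	kubectl --context addons-894046 describe ingress -A
	# recent controller logs (deployment name inferred from pod ingress-nginx-controller-6c8bf45fb-m49v9)
	kubectl --context addons-894046 logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50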

x
+
TestPreload (175.47s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-718211 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1123 10:14:26.894133    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-718211 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m42.832123901s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-718211 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-718211 image pull gcr.io/k8s-minikube/busybox: (6.835308593s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-718211
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-718211: (7.961437532s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-718211 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-718211 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (55.199754245s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-718211 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-11-23 10:15:57.499515038 +0000 UTC m=+3350.954210724
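The failing sequence above can be replayed by hand; a minimal sketch using a throwaway profile (the name test-preload is hypothetical, flags mirror the run above):

	# 1. start without a preload tarball so images land only in the runtime's store
	minikube start -p test-preload --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	# 2. pull an extra image into the cri-o image store
	minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	# 3. stop, then restart on the default start path
	minikube stop -p test-preload
	minikube start -p test-preload --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	# 4. the pulled image should survive the restart; in this run it did not
	minikube -p test-preload image list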
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-718211 -n test-preload-718211
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-718211 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-756144 ssh -n multinode-756144-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ ssh     │ multinode-756144 ssh -n multinode-756144 sudo cat /home/docker/cp-test_multinode-756144-m03_multinode-756144.txt                                          │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ cp      │ multinode-756144 cp multinode-756144-m03:/home/docker/cp-test.txt multinode-756144-m02:/home/docker/cp-test_multinode-756144-m03_multinode-756144-m02.txt │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ ssh     │ multinode-756144 ssh -n multinode-756144-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ ssh     │ multinode-756144 ssh -n multinode-756144-m02 sudo cat /home/docker/cp-test_multinode-756144-m03_multinode-756144-m02.txt                                  │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ node    │ multinode-756144 node stop m03                                                                                                                            │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:01 UTC │
	│ node    │ multinode-756144 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:01 UTC │ 23 Nov 25 10:02 UTC │
	│ node    │ list -p multinode-756144                                                                                                                                  │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:02 UTC │                     │
	│ stop    │ -p multinode-756144                                                                                                                                       │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:02 UTC │ 23 Nov 25 10:05 UTC │
	│ start   │ -p multinode-756144 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:05 UTC │ 23 Nov 25 10:07 UTC │
	│ node    │ list -p multinode-756144                                                                                                                                  │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │                     │
	│ node    │ multinode-756144 node delete m03                                                                                                                          │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:07 UTC │
	│ stop    │ multinode-756144 stop                                                                                                                                     │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:07 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p multinode-756144 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ node    │ list -p multinode-756144                                                                                                                                  │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ start   │ -p multinode-756144-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-756144-m02 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ start   │ -p multinode-756144-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-756144-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ node    │ add -p multinode-756144                                                                                                                                   │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p multinode-756144-m03                                                                                                                                   │ multinode-756144-m03 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p multinode-756144                                                                                                                                       │ multinode-756144     │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p test-preload-718211 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-718211  │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ image   │ test-preload-718211 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-718211  │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ stop    │ -p test-preload-718211                                                                                                                                    │ test-preload-718211  │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:15 UTC │
	│ start   │ -p test-preload-718211 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-718211  │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	│ image   │ test-preload-718211 image list                                                                                                                            │ test-preload-718211  │ jenkins │ v1.37.0 │ 23 Nov 25 10:15 UTC │ 23 Nov 25 10:15 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
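The final five rows of the table above are the exact command sequence TestPreload drove against the test-preload-718211 profile: a --preload=false start on v1.32.0, an image pull, a stop, a restart with preload enabled, and a final image list. Below is a minimal sketch of replaying that sequence outside the test harness with os/exec; the binary path is the MINIKUBE_BIN value from the log and the flags are copied from the table, but the run helper itself is hypothetical and not part of minikube's test code.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run is a hypothetical helper: execute one minikube invocation, stream its
// output, and abort on the first failure, as the sequential test flow would.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	bin := "out/minikube-linux-amd64" // MINIKUBE_BIN from the log below
	p := "test-preload-718211"
	run(bin, "start", "-p", p, "--memory=3072", "--alsologtostderr", "--wait=true", "--preload=false", "--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.32.0")
	run(bin, "-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run(bin, "stop", "-p", p)
	run(bin, "start", "-p", p, "--memory=3072", "--alsologtostderr", "-v=1", "--wait=true", "--driver=kvm2", "--container-runtime=crio")
	run(bin, "-p", p, "image", "list")
}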
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:15:02
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:15:02.174472   31043 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:15:02.174697   31043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:15:02.174706   31043 out.go:374] Setting ErrFile to fd 2...
	I1123 10:15:02.174710   31043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:15:02.174882   31043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:15:02.175299   31043 out.go:368] Setting JSON to false
	I1123 10:15:02.176151   31043 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3440,"bootTime":1763889462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:15:02.176207   31043 start.go:143] virtualization: kvm guest
	I1123 10:15:02.178127   31043 out.go:179] * [test-preload-718211] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:15:02.179497   31043 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:15:02.179494   31043 notify.go:221] Checking for updates...
	I1123 10:15:02.180777   31043 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:15:02.182120   31043 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 10:15:02.183290   31043 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 10:15:02.184517   31043 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:15:02.185731   31043 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:15:02.187522   31043 config.go:182] Loaded profile config "test-preload-718211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:15:02.189595   31043 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 10:15:02.190786   31043 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:15:02.225632   31043 out.go:179] * Using the kvm2 driver based on existing profile
	I1123 10:15:02.226784   31043 start.go:309] selected driver: kvm2
	I1123 10:15:02.226798   31043 start.go:927] validating driver "kvm2" against &{Name:test-preload-718211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-718211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:15:02.226891   31043 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:15:02.227766   31043 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:15:02.227796   31043 cni.go:84] Creating CNI manager for ""
	I1123 10:15:02.227886   31043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 10:15:02.227931   31043 start.go:353] cluster config:
	{Name:test-preload-718211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-718211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:15:02.228042   31043 iso.go:125] acquiring lock: {Name:mkda1f2156fa5a41237d44afe14c60be86e641cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:15:02.229425   31043 out.go:179] * Starting "test-preload-718211" primary control-plane node in "test-preload-718211" cluster
	I1123 10:15:02.230566   31043 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:15:02.891875   31043 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:15:02.891905   31043 cache.go:65] Caching tarball of preloaded images
	I1123 10:15:02.892110   31043 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:15:02.894198   31043 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1123 10:15:02.895434   31043 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 10:15:03.054572   31043 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1123 10:15:03.054619   31043 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1123 10:15:17.948042   31043 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
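The download above fetches the preload tarball with an md5 checksum obtained from the GCS API ("2acdb4dde52794f2167c79dcee7507ae") and appended as a ?checksum=md5:... query. A minimal sketch of that checksum-verified download using only the Go stdlib follows; minikube's own download package additionally handles retries and progress reporting, which this deliberately omits.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// URL and expected md5 are taken verbatim from the log lines above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	want := "2acdb4dde52794f2167c79dcee7507ae"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// TeeReader hashes the stream while writing it to disk in a single pass.
	h := md5.New()
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	log.Println("preload tarball verified")
}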
	I1123 10:15:17.948166   31043 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/config.json ...
	I1123 10:15:17.948390   31043 start.go:360] acquireMachinesLock for test-preload-718211: {Name:mk3faa1cfbcacb62e9602286e0ef7afeec78d5f2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1123 10:15:17.948456   31043 start.go:364] duration metric: took 47.175µs to acquireMachinesLock for "test-preload-718211"
	I1123 10:15:17.948472   31043 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:15:17.948477   31043 fix.go:54] fixHost starting: 
	I1123 10:15:17.950309   31043 fix.go:112] recreateIfNeeded on test-preload-718211: state=Stopped err=<nil>
	W1123 10:15:17.950331   31043 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:15:17.952162   31043 out.go:252] * Restarting existing kvm2 VM for "test-preload-718211" ...
	I1123 10:15:17.952198   31043 main.go:143] libmachine: starting domain...
	I1123 10:15:17.952206   31043 main.go:143] libmachine: ensuring networks are active...
	I1123 10:15:17.952952   31043 main.go:143] libmachine: Ensuring network default is active
	I1123 10:15:17.953302   31043 main.go:143] libmachine: Ensuring network mk-test-preload-718211 is active
	I1123 10:15:17.953700   31043 main.go:143] libmachine: getting domain XML...
	I1123 10:15:17.954643   31043 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-718211</name>
	  <uuid>a9936135-1e0d-448c-b623-201412d92ab6</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/test-preload-718211.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:23:92:1d'/>
	      <source network='mk-test-preload-718211'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b5:03:e7'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
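Right after defining this domain, libmachine resolves the VM's IP by matching the first interface's MAC address against the DHCP leases of the mk-test-preload-718211 network. A minimal sketch of extracting those MAC/network pairs from the domain XML with encoding/xml is below; the struct covers only the fields used here, and the input file name is an assumption for illustration.

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// domain models just enough of the libvirt domain XML above to reach the
// <devices><interface> entries; field names are for this sketch only.
type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	raw, err := os.ReadFile("test-preload-718211.xml") // assumed dump of the XML above
	if err != nil {
		panic(err)
	}
	var d domain
	if err := xml.Unmarshal(raw, &d); err != nil {
		panic(err)
	}
	for _, ifc := range d.Interfaces {
		fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, ifc.MAC.Address, ifc.Source.Network)
	}
}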
	
	I1123 10:15:19.197669   31043 main.go:143] libmachine: waiting for domain to start...
	I1123 10:15:19.199114   31043 main.go:143] libmachine: domain is now running
	I1123 10:15:19.199130   31043 main.go:143] libmachine: waiting for IP...
	I1123 10:15:19.199987   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:19.200580   31043 main.go:143] libmachine: domain test-preload-718211 has current primary IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:19.200594   31043 main.go:143] libmachine: found domain IP: 192.168.39.170
	I1123 10:15:19.200602   31043 main.go:143] libmachine: reserving static IP address...
	I1123 10:15:19.201051   31043 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-718211", mac: "52:54:00:23:92:1d", ip: "192.168.39.170"} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:13:20 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:19.201077   31043 main.go:143] libmachine: skip adding static IP to network mk-test-preload-718211 - found existing host DHCP lease matching {name: "test-preload-718211", mac: "52:54:00:23:92:1d", ip: "192.168.39.170"}
	I1123 10:15:19.201087   31043 main.go:143] libmachine: reserved static IP address 192.168.39.170 for domain test-preload-718211
	I1123 10:15:19.201092   31043 main.go:143] libmachine: waiting for SSH...
	I1123 10:15:19.201106   31043 main.go:143] libmachine: Getting to WaitForSSH function...
	I1123 10:15:19.203510   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:19.203838   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:13:20 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:19.203857   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:19.204029   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:19.204267   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:19.204280   31043 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1123 10:15:22.307201   31043 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.170:22: connect: no route to host
	I1123 10:15:28.387327   31043 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.170:22: connect: no route to host
	I1123 10:15:31.494431   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: 
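The two "no route to host" dials followed by a clean "exit 0" show the usual wait-for-SSH loop: keep dialing the guest's port 22 until the freshly booted VM brings its network up. A minimal stdlib sketch of that loop is below; the address is the one reserved above, while the overall deadline and poll interval are illustrative values, not minikube's.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.170:22"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget for boot
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("SSH port reachable:", addr)
			return
		}
		// Typical transient error while the guest boots:
		// "dial tcp 192.168.39.170:22: connect: no route to host"
		fmt.Println("still waiting:", err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for", addr)
}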
	I1123 10:15:31.497917   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.498481   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.498520   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.498820   31043 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/config.json ...
	I1123 10:15:31.499128   31043 machine.go:94] provisionDockerMachine start ...
	I1123 10:15:31.501803   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.502248   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.502287   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.502527   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:31.502800   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:31.502820   31043 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:15:31.623449   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1123 10:15:31.623490   31043 buildroot.go:166] provisioning hostname "test-preload-718211"
	I1123 10:15:31.626244   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.626647   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.626681   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.626859   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:31.627154   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:31.627170   31043 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-718211 && echo "test-preload-718211" | sudo tee /etc/hostname
	I1123 10:15:31.756077   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-718211
	
	I1123 10:15:31.759210   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.759690   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.759742   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.759983   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:31.760189   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:31.760206   31043 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-718211' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-718211/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-718211' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:15:31.878931   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:15:31.878980   31043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21968-3638/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-3638/.minikube}
	I1123 10:15:31.879003   31043 buildroot.go:174] setting up certificates
	I1123 10:15:31.879015   31043 provision.go:84] configureAuth start
	I1123 10:15:31.882079   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.882479   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.882502   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.884897   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.885299   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.885327   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.885447   31043 provision.go:143] copyHostCerts
	I1123 10:15:31.885488   31043 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3638/.minikube/ca.pem, removing ...
	I1123 10:15:31.885503   31043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3638/.minikube/ca.pem
	I1123 10:15:31.885562   31043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/ca.pem (1078 bytes)
	I1123 10:15:31.885656   31043 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3638/.minikube/cert.pem, removing ...
	I1123 10:15:31.885667   31043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3638/.minikube/cert.pem
	I1123 10:15:31.885693   31043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/cert.pem (1123 bytes)
	I1123 10:15:31.885745   31043 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-3638/.minikube/key.pem, removing ...
	I1123 10:15:31.885752   31043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-3638/.minikube/key.pem
	I1123 10:15:31.885773   31043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-3638/.minikube/key.pem (1679 bytes)
	I1123 10:15:31.885820   31043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem org=jenkins.test-preload-718211 san=[127.0.0.1 192.168.39.170 localhost minikube test-preload-718211]
	I1123 10:15:31.995092   31043 provision.go:177] copyRemoteCerts
	I1123 10:15:31.995147   31043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:15:31.997872   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.998359   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:31.998391   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:31.998582   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:32.087082   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:15:32.121464   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:15:32.157380   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:15:32.210384   31043 provision.go:87] duration metric: took 331.357604ms to configureAuth
	I1123 10:15:32.210413   31043 buildroot.go:189] setting minikube options for container-runtime
	I1123 10:15:32.210589   31043 config.go:182] Loaded profile config "test-preload-718211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:15:32.213561   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.214033   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.214062   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.214275   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:32.214481   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:32.214506   31043 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:15:32.476228   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:15:32.476253   31043 machine.go:97] duration metric: took 977.109009ms to provisionDockerMachine
	I1123 10:15:32.476267   31043 start.go:293] postStartSetup for "test-preload-718211" (driver="kvm2")
	I1123 10:15:32.476279   31043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:15:32.476342   31043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:15:32.479322   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.479745   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.479770   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.479901   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:32.561459   31043 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:15:32.566314   31043 info.go:137] Remote host: Buildroot 2025.02
	I1123 10:15:32.566339   31043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3638/.minikube/addons for local assets ...
	I1123 10:15:32.566414   31043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-3638/.minikube/files for local assets ...
	I1123 10:15:32.566515   31043 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-3638/.minikube/files/etc/ssl/certs/75902.pem -> 75902.pem in /etc/ssl/certs
	I1123 10:15:32.566611   31043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:15:32.578674   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/files/etc/ssl/certs/75902.pem --> /etc/ssl/certs/75902.pem (1708 bytes)
	I1123 10:15:32.608682   31043 start.go:296] duration metric: took 132.399191ms for postStartSetup
	I1123 10:15:32.608732   31043 fix.go:56] duration metric: took 14.660252826s for fixHost
	I1123 10:15:32.611306   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.611664   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.611683   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.611830   31043 main.go:143] libmachine: Using SSH client type: native
	I1123 10:15:32.612074   31043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1123 10:15:32.612088   31043 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1123 10:15:32.712512   31043 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763892932.668439468
	
	I1123 10:15:32.712538   31043 fix.go:216] guest clock: 1763892932.668439468
	I1123 10:15:32.712549   31043 fix.go:229] Guest: 2025-11-23 10:15:32.668439468 +0000 UTC Remote: 2025-11-23 10:15:32.608737661 +0000 UTC m=+30.482854145 (delta=59.701807ms)
	I1123 10:15:32.712570   31043 fix.go:200] guest clock delta is within tolerance: 59.701807ms
	I1123 10:15:32.712579   31043 start.go:83] releasing machines lock for "test-preload-718211", held for 14.764111665s
	I1123 10:15:32.715320   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.715701   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.715722   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.716285   31043 ssh_runner.go:195] Run: cat /version.json
	I1123 10:15:32.716311   31043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:15:32.718989   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.719119   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.719350   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.719386   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.719473   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:32.719503   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:32.719530   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:32.719734   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:32.819131   31043 ssh_runner.go:195] Run: systemctl --version
	I1123 10:15:32.825418   31043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:15:32.970543   31043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:15:32.977988   31043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:15:32.978046   31043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:15:32.998040   31043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 10:15:32.998064   31043 start.go:496] detecting cgroup driver to use...
	I1123 10:15:32.998120   31043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:15:33.017676   31043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:15:33.034391   31043 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:15:33.034441   31043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:15:33.057015   31043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:15:33.072884   31043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:15:33.219384   31043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:15:33.434164   31043 docker.go:234] disabling docker service ...
	I1123 10:15:33.434236   31043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:15:33.450708   31043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:15:33.466201   31043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:15:33.620204   31043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:15:33.763047   31043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:15:33.778526   31043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:15:33.801122   31043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1123 10:15:33.801200   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.813804   31043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:15:33.813856   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.825967   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.837919   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.849848   31043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:15:33.862289   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.874218   31043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.894096   31043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:15:33.906583   31043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:15:33.916714   31043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 10:15:33.916776   31043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 10:15:33.941791   31043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
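The sequence above is a fallback: when the net.bridge.bridge-nf-call-iptables sysctl cannot be read, minikube loads br_netfilter and then enables IPv4 forwarding. A minimal sketch of the same check, modprobe, and enable steps in Go is below; the paths are the ones in the log, the program needs root, and it is an illustration rather than minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctl); err != nil {
		// Mirrors the "cannot stat" failure above: the bridge module is not
		// loaded yet, so load br_netfilter before retrying anything.
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}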
	I1123 10:15:33.957000   31043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:15:34.103807   31043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:15:34.211585   31043 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:15:34.211660   31043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:15:34.217818   31043 start.go:564] Will wait 60s for crictl version
	I1123 10:15:34.217886   31043 ssh_runner.go:195] Run: which crictl
	I1123 10:15:34.222074   31043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1123 10:15:34.257607   31043 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1123 10:15:34.257697   31043 ssh_runner.go:195] Run: crio --version
	I1123 10:15:34.288901   31043 ssh_runner.go:195] Run: crio --version
	I1123 10:15:34.319813   31043 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1123 10:15:34.323552   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:34.323885   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:34.323904   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:34.324082   31043 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1123 10:15:34.328697   31043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:15:34.343709   31043 kubeadm.go:884] updating cluster {Name:test-preload-718211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-718211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:15:34.343833   31043 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1123 10:15:34.343879   31043 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:15:34.379209   31043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1123 10:15:34.379283   31043 ssh_runner.go:195] Run: which lz4
	I1123 10:15:34.383752   31043 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1123 10:15:34.388722   31043 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1123 10:15:34.388755   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1123 10:15:35.906298   31043 crio.go:462] duration metric: took 1.522584019s to copy over tarball
	I1123 10:15:35.906377   31043 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1123 10:15:37.553358   31043 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.646944303s)
	I1123 10:15:37.553394   31043 crio.go:469] duration metric: took 1.647069911s to extract the tarball
	I1123 10:15:37.553404   31043 ssh_runner.go:146] rm: /preloaded.tar.lz4
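Lines like "Completed: ... (1.646944303s)" come from timing each remote command and logging a duration metric. A minimal sketch of running the same tar extraction locally with that pattern follows; the tar flags are copied from the log, while the sudo invocation and log format are incidental to the illustration.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the log: extract the lz4-compressed preload tarball into
	// /var, preserving security.capability xattrs on the image layers.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("Completed: tar extraction (%s)", time.Since(start))
}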
	I1123 10:15:37.593972   31043 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:15:37.635632   31043 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:15:37.635661   31043 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:15:37.635672   31043 kubeadm.go:935] updating node { 192.168.39.170 8443 v1.32.0 crio true true} ...
	I1123 10:15:37.635776   31043 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-718211 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-718211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:15:37.635854   31043 ssh_runner.go:195] Run: crio config
	I1123 10:15:37.682799   31043 cni.go:84] Creating CNI manager for ""
	I1123 10:15:37.682826   31043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 10:15:37.682845   31043 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:15:37.682868   31043 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-718211 NodeName:test-preload-718211 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:15:37.683030   31043 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-718211"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.170"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
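The generated config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- lines; a few lines below it is copied to the guest as /var/tmp/minikube/kubeadm.yaml.new. Below is a minimal sketch that splits such a file and reports each document's kind; it uses only the stdlib, so it merely scans for the top-level "kind:" line rather than parsing YAML properly, and the input path is an assumption.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	// Multi-document YAML: documents are separated by lines containing "---".
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i, kind)
	}
}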
	
	I1123 10:15:37.683111   31043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1123 10:15:37.695495   31043 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:15:37.695566   31043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:15:37.707420   31043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1123 10:15:37.728122   31043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:15:37.748639   31043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1123 10:15:37.769128   31043 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I1123 10:15:37.773358   31043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:15:37.787714   31043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:15:37.932397   31043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:15:37.976433   31043 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211 for IP: 192.168.39.170
	I1123 10:15:37.976464   31043 certs.go:195] generating shared ca certs ...
	I1123 10:15:37.976484   31043 certs.go:227] acquiring lock for ca certs: {Name:mkc236b2df9db5d23fb877d4ca5dc928e3eefed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:15:37.976619   31043 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key
	I1123 10:15:37.976664   31043 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key
	I1123 10:15:37.976674   31043 certs.go:257] generating profile certs ...
	I1123 10:15:37.976751   31043 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.key
	I1123 10:15:37.976799   31043 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/apiserver.key.c64c64df
	I1123 10:15:37.976837   31043 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/proxy-client.key
	I1123 10:15:37.976966   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/7590.pem (1338 bytes)
	W1123 10:15:37.977010   31043 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-3638/.minikube/certs/7590_empty.pem, impossibly tiny 0 bytes
	I1123 10:15:37.977018   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:15:37.977045   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:15:37.977069   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:15:37.977090   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/certs/key.pem (1679 bytes)
	I1123 10:15:37.977128   31043 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-3638/.minikube/files/etc/ssl/certs/75902.pem (1708 bytes)
	I1123 10:15:37.977691   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:15:38.019315   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:15:38.056012   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:15:38.085442   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:15:38.115363   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:15:38.145481   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1123 10:15:38.174397   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:15:38.203025   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:15:38.232111   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/certs/7590.pem --> /usr/share/ca-certificates/7590.pem (1338 bytes)
	I1123 10:15:38.260197   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/files/etc/ssl/certs/75902.pem --> /usr/share/ca-certificates/75902.pem (1708 bytes)
	I1123 10:15:38.288806   31043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:15:38.318540   31043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:15:38.339373   31043 ssh_runner.go:195] Run: openssl version
	I1123 10:15:38.346462   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:15:38.360337   31043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:15:38.365704   31043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 09:21 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:15:38.365755   31043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:15:38.373180   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:15:38.386687   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7590.pem && ln -fs /usr/share/ca-certificates/7590.pem /etc/ssl/certs/7590.pem"
	I1123 10:15:38.400548   31043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7590.pem
	I1123 10:15:38.406006   31043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:29 /usr/share/ca-certificates/7590.pem
	I1123 10:15:38.406070   31043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7590.pem
	I1123 10:15:38.413712   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7590.pem /etc/ssl/certs/51391683.0"
	I1123 10:15:38.427321   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75902.pem && ln -fs /usr/share/ca-certificates/75902.pem /etc/ssl/certs/75902.pem"
	I1123 10:15:38.441034   31043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75902.pem
	I1123 10:15:38.446353   31043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:29 /usr/share/ca-certificates/75902.pem
	I1123 10:15:38.446402   31043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75902.pem
	I1123 10:15:38.453704   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75902.pem /etc/ssl/certs/3ec20f2e.0"
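
The ln -fs steps above follow OpenSSL's hash-directory convention: openssl x509 -hash -noout prints the certificate's subject hash, and a symlink named <hash>.0 in /etc/ssl/certs is how OpenSSL locates the CA during verification. A small Go sketch of the same step (assumes the openssl binary is on PATH; paths are illustrative):

```go
// Sketch of the hash-symlink step above: ask openssl for the subject
// hash of a CA certificate and create the "<hash>.0" link that the
// OpenSSL directory lookup expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("minikubeCA.pem", "."); err != nil {
		fmt.Println(err)
	}
}
```
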
	I1123 10:15:38.467001   31043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:15:38.472365   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:15:38.479888   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:15:38.487122   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:15:38.494444   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:15:38.501726   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:15:38.509189   31043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
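
The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether certs need regenerating. A native equivalent using crypto/x509 (illustrative file name):

```go
// Native equivalent of `openssl x509 -checkend 86400` used above:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```
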
	I1123 10:15:38.516405   31043 kubeadm.go:401] StartCluster: {Name:test-preload-718211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-718211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:15:38.516490   31043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:15:38.516535   31043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:15:38.551336   31043 cri.go:89] found id: ""
	I1123 10:15:38.551483   31043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:15:38.564329   31043 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:15:38.564349   31043 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:15:38.564403   31043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:15:38.576869   31043 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:15:38.577311   31043 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-718211" does not appear in /home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 10:15:38.577412   31043 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-3638/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-718211" cluster setting kubeconfig missing "test-preload-718211" context setting]
	I1123 10:15:38.577719   31043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/kubeconfig: {Name:mk064b50b49499ad2e4fbd86fe10fb95b12274a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:15:38.578198   31043 kapi.go:59] client config for test-preload-718211: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.key", CAFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:15:38.578643   31043 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 10:15:38.578659   31043 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 10:15:38.578663   31043 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 10:15:38.578667   31043 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 10:15:38.578671   31043 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 10:15:38.579009   31043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:15:38.591048   31043 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.170
	I1123 10:15:38.591085   31043 kubeadm.go:1161] stopping kube-system containers ...
	I1123 10:15:38.591098   31043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 10:15:38.591147   31043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:15:38.625868   31043 cri.go:89] found id: ""
	I1123 10:15:38.625961   31043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1123 10:15:38.644984   31043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:15:38.657377   31043 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:15:38.657400   31043 kubeadm.go:158] found existing configuration files:
	
	I1123 10:15:38.657457   31043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:15:38.668654   31043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:15:38.668720   31043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:15:38.680554   31043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:15:38.691718   31043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:15:38.691805   31043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:15:38.703858   31043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:15:38.714993   31043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:15:38.715049   31043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:15:38.728907   31043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:15:38.741982   31043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:15:38.742034   31043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
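
Each of the four kubeconfig files above is grepped for the pinned endpoint; a missing file or missing endpoint means the file is removed so that kubeadm regenerates it in the next phase. A compact Go sketch of that sweep:

```go
// Sketch of the stale-kubeconfig sweep above: keep each file only if
// it already points at the pinned control-plane endpoint; otherwise
// delete it so `kubeadm init phase kubeconfig` rewrites it.
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint
		}
		os.Remove(f) // ignore errors: the file may simply not exist, as in the log
	}
}
```
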
	I1123 10:15:38.755587   31043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:15:38.769328   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:15:38.828559   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:15:39.856996   31043 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02840248s)
	I1123 10:15:39.857060   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:15:40.114540   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:15:40.194062   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
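
Rather than a full kubeadm init, the restart path replays individual init phases against the saved config: certs, kubeconfig, kubelet-start, control-plane, and local etcd, as the five commands above show. A sketch of that loop (binary and config paths taken from the log; requires root in practice):

```go
// Sketch of the phased restart above: replay individual
// `kubeadm init phase` steps against the saved config instead of
// running a full init.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("phase %q: %v\n%s", p, err, out)
		}
	}
}
```
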
	I1123 10:15:40.282287   31043 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:15:40.282354   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:40.782882   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:41.282827   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:41.783314   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:42.283065   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:42.782680   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:42.821357   31043 api_server.go:72] duration metric: took 2.539078379s to wait for apiserver process to appear ...
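
The process wait above polls pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until the apiserver process exists. A rough /proc-based equivalent (Linux-only; pgrep's -x/-n matching is simplified here):

```go
// Rough /proc-based equivalent of the `pgrep -xnf` poll above: scan
// each process's full command line for a pattern until one matches.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"time"
)

func findProc(re *regexp.Regexp) bool {
	paths, _ := filepath.Glob("/proc/[0-9]*/cmdline")
	for _, p := range paths {
		raw, err := os.ReadFile(p)
		if err != nil {
			continue
		}
		// /proc cmdline separates arguments with NUL bytes
		cmdline := string(bytes.ReplaceAll(raw, []byte{0}, []byte{' '}))
		if re.MatchString(cmdline) {
			return true
		}
	}
	return false
}

func main() {
	re := regexp.MustCompile(`kube-apiserver.*minikube.*`)
	for !findProc(re) {
		time.Sleep(500 * time.Millisecond) // same cadence as the log
	}
	fmt.Println("kube-apiserver process is up")
}
```
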
	I1123 10:15:42.821390   31043 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:15:42.821416   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:45.660065   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:15:45.660092   31043 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:15:45.660106   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:45.690260   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 10:15:45.690288   31043 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 10:15:45.821588   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:45.826038   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:15:45.826060   31043 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:15:46.321688   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:46.333490   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:15:46.333513   31043 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:15:46.822087   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:46.834097   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:15:46.834118   31043 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:15:47.321915   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:47.326037   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1123 10:15:47.332679   31043 api_server.go:141] control plane version: v1.32.0
	I1123 10:15:47.332704   31043 api_server.go:131] duration metric: took 4.511307008s to wait for apiserver health ...
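
The healthz probes above progress 403 → 500 → 200: the first responses are 403 because the probe is unauthenticated and the RBAC bootstrap post-start hook has not yet created the roles (such as system:public-info-viewer) that permit anonymous reads of /healthz; then 500 while named checks like poststarthook/rbac/bootstrap-roles still fail; finally 200 ok. A minimal poller in the same spirit (InsecureSkipVerify stands in for loading the cluster CA; address taken from the log):

```go
// Sketch of the healthz poll above: unauthenticated GETs against the
// apiserver, treating 403 (anonymous not yet allowed) and 500 (named
// checks still failing) as "keep waiting" and 200 "ok" as ready.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.170:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
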
	I1123 10:15:47.332713   31043 cni.go:84] Creating CNI manager for ""
	I1123 10:15:47.332718   31043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 10:15:47.334334   31043 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1123 10:15:47.335515   31043 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1123 10:15:47.348330   31043 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
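
The 496-byte file written above is a bridge CNI conflist for the 10.244.0.0/16 pod CIDR. An illustrative writer for such a file (the JSON is a plausible bridge + host-local configuration, not a byte-for-byte copy of what minikube generated; writing to /etc/cni/net.d requires root):

```go
// Illustrative bridge CNI conflist writer for the step above.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
```
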
	I1123 10:15:47.379579   31043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:15:47.386493   31043 system_pods.go:59] 7 kube-system pods found
	I1123 10:15:47.386524   31043 system_pods.go:61] "coredns-668d6bf9bc-kmv7f" [fec76645-d2d7-4d67-ab44-5fcb2024a33f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:15:47.386532   31043 system_pods.go:61] "etcd-test-preload-718211" [5623dfa7-2bd0-412c-ac2f-86f473f0c792] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:15:47.386547   31043 system_pods.go:61] "kube-apiserver-test-preload-718211" [51d6aaa8-2341-4313-8d70-e521cd58ba2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:15:47.386555   31043 system_pods.go:61] "kube-controller-manager-test-preload-718211" [146594c7-84b4-4cd1-8060-1ef52846f298] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:15:47.386566   31043 system_pods.go:61] "kube-proxy-4ht8c" [ce2e77f2-6683-4646-b578-6216c86593fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:15:47.386583   31043 system_pods.go:61] "kube-scheduler-test-preload-718211" [a4a759e0-b2ca-45c7-9b4c-846c85f09d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:15:47.386588   31043 system_pods.go:61] "storage-provisioner" [a1456787-8da1-469e-9584-dc480344f163] Running
	I1123 10:15:47.386597   31043 system_pods.go:74] duration metric: took 6.990329ms to wait for pod list to return data ...
	I1123 10:15:47.386608   31043 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:15:47.394075   31043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 10:15:47.394097   31043 node_conditions.go:123] node cpu capacity is 2
	I1123 10:15:47.394109   31043 node_conditions.go:105] duration metric: took 7.493031ms to run NodePressure ...
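
The NodePressure verification above reads CPU and ephemeral-storage capacity straight off the Node objects. A client-go sketch of the same read (kubeconfig path is illustrative):

```go
// Sketch of the NodePressure capacity read above using client-go:
// list the nodes and print their CPU and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```
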
	I1123 10:15:47.394181   31043 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 10:15:47.668865   31043 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1123 10:15:47.671987   31043 kubeadm.go:744] kubelet initialised
	I1123 10:15:47.672004   31043 kubeadm.go:745] duration metric: took 3.114041ms waiting for restarted kubelet to initialise ...
	I1123 10:15:47.672020   31043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:15:47.700373   31043 ops.go:34] apiserver oom_adj: -16
	I1123 10:15:47.700399   31043 kubeadm.go:602] duration metric: took 9.136042166s to restartPrimaryControlPlane
	I1123 10:15:47.700410   31043 kubeadm.go:403] duration metric: took 9.18402058s to StartCluster
	I1123 10:15:47.700450   31043 settings.go:142] acquiring lock: {Name:mkda898dc919f319fca5c9c62e0026647031093a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:15:47.700558   31043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 10:15:47.701451   31043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-3638/kubeconfig: {Name:mk064b50b49499ad2e4fbd86fe10fb95b12274a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:15:47.701732   31043 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:15:47.701812   31043 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:15:47.701902   31043 addons.go:70] Setting storage-provisioner=true in profile "test-preload-718211"
	I1123 10:15:47.701923   31043 addons.go:239] Setting addon storage-provisioner=true in "test-preload-718211"
	W1123 10:15:47.701932   31043 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:15:47.701927   31043 addons.go:70] Setting default-storageclass=true in profile "test-preload-718211"
	I1123 10:15:47.701971   31043 host.go:66] Checking if "test-preload-718211" exists ...
	I1123 10:15:47.701976   31043 config.go:182] Loaded profile config "test-preload-718211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1123 10:15:47.701980   31043 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-718211"
	I1123 10:15:47.704332   31043 out.go:179] * Verifying Kubernetes components...
	I1123 10:15:47.704610   31043 kapi.go:59] client config for test-preload-718211: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.key", CAFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:15:47.704871   31043 addons.go:239] Setting addon default-storageclass=true in "test-preload-718211"
	W1123 10:15:47.704884   31043 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:15:47.704900   31043 host.go:66] Checking if "test-preload-718211" exists ...
	I1123 10:15:47.705732   31043 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:15:47.705781   31043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:15:47.706382   31043 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:15:47.706401   31043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:15:47.707003   31043 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:15:47.707024   31043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:15:47.709713   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:47.710161   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:47.710191   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:47.710314   31043 main.go:143] libmachine: domain test-preload-718211 has defined MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:47.710325   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:47.710772   31043 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:92:1d", ip: ""} in network mk-test-preload-718211: {Iface:virbr1 ExpiryTime:2025-11-23 11:15:29 +0000 UTC Type:0 Mac:52:54:00:23:92:1d Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-718211 Clientid:01:52:54:00:23:92:1d}
	I1123 10:15:47.710807   31043 main.go:143] libmachine: domain test-preload-718211 has defined IP address 192.168.39.170 and MAC address 52:54:00:23:92:1d in network mk-test-preload-718211
	I1123 10:15:47.711017   31043 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/test-preload-718211/id_rsa Username:docker}
	I1123 10:15:47.953515   31043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:15:47.972542   31043 node_ready.go:35] waiting up to 6m0s for node "test-preload-718211" to be "Ready" ...
	I1123 10:15:47.975355   31043 node_ready.go:49] node "test-preload-718211" is "Ready"
	I1123 10:15:47.975374   31043 node_ready.go:38] duration metric: took 2.800554ms for node "test-preload-718211" to be "Ready" ...
	I1123 10:15:47.975385   31043 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:15:47.975440   31043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:15:47.995825   31043 api_server.go:72] duration metric: took 294.057445ms to wait for apiserver process to appear ...
	I1123 10:15:47.995853   31043 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:15:47.995872   31043 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1123 10:15:48.002278   31043 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1123 10:15:48.003140   31043 api_server.go:141] control plane version: v1.32.0
	I1123 10:15:48.003162   31043 api_server.go:131] duration metric: took 7.301898ms to wait for apiserver health ...
	I1123 10:15:48.003170   31043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:15:48.006643   31043 system_pods.go:59] 7 kube-system pods found
	I1123 10:15:48.006670   31043 system_pods.go:61] "coredns-668d6bf9bc-kmv7f" [fec76645-d2d7-4d67-ab44-5fcb2024a33f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:15:48.006680   31043 system_pods.go:61] "etcd-test-preload-718211" [5623dfa7-2bd0-412c-ac2f-86f473f0c792] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:15:48.006699   31043 system_pods.go:61] "kube-apiserver-test-preload-718211" [51d6aaa8-2341-4313-8d70-e521cd58ba2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:15:48.006709   31043 system_pods.go:61] "kube-controller-manager-test-preload-718211" [146594c7-84b4-4cd1-8060-1ef52846f298] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:15:48.006716   31043 system_pods.go:61] "kube-proxy-4ht8c" [ce2e77f2-6683-4646-b578-6216c86593fa] Running
	I1123 10:15:48.006722   31043 system_pods.go:61] "kube-scheduler-test-preload-718211" [a4a759e0-b2ca-45c7-9b4c-846c85f09d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:15:48.006733   31043 system_pods.go:61] "storage-provisioner" [a1456787-8da1-469e-9584-dc480344f163] Running
	I1123 10:15:48.006740   31043 system_pods.go:74] duration metric: took 3.56432ms to wait for pod list to return data ...
	I1123 10:15:48.006746   31043 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:15:48.009284   31043 default_sa.go:45] found service account: "default"
	I1123 10:15:48.009299   31043 default_sa.go:55] duration metric: took 2.547851ms for default service account to be created ...
	I1123 10:15:48.009305   31043 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:15:48.012593   31043 system_pods.go:86] 7 kube-system pods found
	I1123 10:15:48.012617   31043 system_pods.go:89] "coredns-668d6bf9bc-kmv7f" [fec76645-d2d7-4d67-ab44-5fcb2024a33f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:15:48.012626   31043 system_pods.go:89] "etcd-test-preload-718211" [5623dfa7-2bd0-412c-ac2f-86f473f0c792] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:15:48.012634   31043 system_pods.go:89] "kube-apiserver-test-preload-718211" [51d6aaa8-2341-4313-8d70-e521cd58ba2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:15:48.012639   31043 system_pods.go:89] "kube-controller-manager-test-preload-718211" [146594c7-84b4-4cd1-8060-1ef52846f298] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:15:48.012643   31043 system_pods.go:89] "kube-proxy-4ht8c" [ce2e77f2-6683-4646-b578-6216c86593fa] Running
	I1123 10:15:48.012648   31043 system_pods.go:89] "kube-scheduler-test-preload-718211" [a4a759e0-b2ca-45c7-9b4c-846c85f09d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:15:48.012652   31043 system_pods.go:89] "storage-provisioner" [a1456787-8da1-469e-9584-dc480344f163] Running
	I1123 10:15:48.012659   31043 system_pods.go:126] duration metric: took 3.349389ms to wait for k8s-apps to be running ...
	I1123 10:15:48.012665   31043 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:15:48.012703   31043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:15:48.029758   31043 system_svc.go:56] duration metric: took 17.087717ms WaitForService to wait for kubelet
	I1123 10:15:48.029780   31043 kubeadm.go:587] duration metric: took 328.0176ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:15:48.029795   31043 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:15:48.032271   31043 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1123 10:15:48.032294   31043 node_conditions.go:123] node cpu capacity is 2
	I1123 10:15:48.032307   31043 node_conditions.go:105] duration metric: took 2.507663ms to run NodePressure ...
	I1123 10:15:48.032323   31043 start.go:242] waiting for startup goroutines ...
	I1123 10:15:48.170548   31043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:15:48.176184   31043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:15:48.923799   31043 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 10:15:48.925285   31043 addons.go:530] duration metric: took 1.223457406s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:15:48.925327   31043 start.go:247] waiting for cluster config update ...
	I1123 10:15:48.925341   31043 start.go:256] writing updated cluster config ...
	I1123 10:15:48.925642   31043 ssh_runner.go:195] Run: rm -f paused
	I1123 10:15:48.934489   31043 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:15:48.934967   31043 kapi.go:59] client config for test-preload-718211: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.crt", KeyFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/profiles/test-preload-718211/client.key", CAFile:"/home/jenkins/minikube-integration/21968-3638/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 10:15:48.959301   31043 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-kmv7f" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:15:50.964773   31043 pod_ready.go:104] pod "coredns-668d6bf9bc-kmv7f" is not "Ready", error: <nil>
	W1123 10:15:52.965764   31043 pod_ready.go:104] pod "coredns-668d6bf9bc-kmv7f" is not "Ready", error: <nil>
	I1123 10:15:55.466023   31043 pod_ready.go:94] pod "coredns-668d6bf9bc-kmv7f" is "Ready"
	I1123 10:15:55.466048   31043 pod_ready.go:86] duration metric: took 6.506724502s for pod "coredns-668d6bf9bc-kmv7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.469174   31043 pod_ready.go:83] waiting for pod "etcd-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.473092   31043 pod_ready.go:94] pod "etcd-test-preload-718211" is "Ready"
	I1123 10:15:55.473117   31043 pod_ready.go:86] duration metric: took 3.915694ms for pod "etcd-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.475267   31043 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.480279   31043 pod_ready.go:94] pod "kube-apiserver-test-preload-718211" is "Ready"
	I1123 10:15:55.480301   31043 pod_ready.go:86] duration metric: took 5.013905ms for pod "kube-apiserver-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.482218   31043 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.663685   31043 pod_ready.go:94] pod "kube-controller-manager-test-preload-718211" is "Ready"
	I1123 10:15:55.663711   31043 pod_ready.go:86] duration metric: took 181.4717ms for pod "kube-controller-manager-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:55.863906   31043 pod_ready.go:83] waiting for pod "kube-proxy-4ht8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:56.264280   31043 pod_ready.go:94] pod "kube-proxy-4ht8c" is "Ready"
	I1123 10:15:56.264316   31043 pod_ready.go:86] duration metric: took 400.377862ms for pod "kube-proxy-4ht8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:56.464348   31043 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:57.263721   31043 pod_ready.go:94] pod "kube-scheduler-test-preload-718211" is "Ready"
	I1123 10:15:57.263753   31043 pod_ready.go:86] duration metric: took 799.371463ms for pod "kube-scheduler-test-preload-718211" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:15:57.263765   31043 pod_ready.go:40] duration metric: took 8.329249172s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
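
The pod_ready waits above poll each pod's Ready condition and also treat a deleted pod as satisfying the wait ("Ready or be gone"). A client-go sketch of one such wait (kubeconfig path is illustrative; namespace and pod name taken from the log):

```go
// Sketch of the pod_ready wait above: poll one pod until its Ready
// condition is True, treating a deleted pod as done.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, which also satisfies the wait
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReadyOrGone(cs, "kube-system", "coredns-668d6bf9bc-kmv7f", 4*time.Minute))
}
```
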
	I1123 10:15:57.304989   31043 start.go:625] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1123 10:15:57.306372   31043 out.go:203] 
	W1123 10:15:57.307395   31043 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1123 10:15:57.308405   31043 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:15:57.309557   31043 out.go:179] * Done! kubectl is now configured to use "test-preload-718211" cluster and "default" namespace by default
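
The warning above comes from kubectl's version-skew policy: kubectl is supported within one minor version of the apiserver, and 1.34 against 1.32 is a skew of 2. A toy check in the same spirit:

```go
// Sketch of the skew check behind the warning above: kubectl is
// supported within one minor version of the server, so 1.34 against
// 1.32 (skew 2) triggers the notice.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, server := "1.34.2", "1.32.0"
	skew := minor(kubectl) - minor(server)
	if skew < 0 {
		skew = -skew
	}
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s (minor skew: %d)\n",
			kubectl, server, skew)
	}
}
```
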
	
	
	==> CRI-O <==
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.030993246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892958030969614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad9d0a08-b401-49aa-b75a-f7a8c869c496 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.031937778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d0f0f95-ce0d-41f7-82c3-7fa3d9376cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.032005202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d0f0f95-ce0d-41f7-82c3-7fa3d9376cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.032168324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82ccabe9f65dd0a228abe94efb0dd0e20fe30e54a870c6dad71965544f6b634,PodSandboxId:816b22ef1f82478b5387e78ffa6a959a8d481026cc3547c4c38a10123304da43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763892950081769677,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kmv7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec76645-d2d7-4d67-ab44-5fcb2024a33f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8c00f8d8306f38a1da38c9b65b3a54fd7024bf1cf182769e3d473cc376bdc,PodSandboxId:3c2184d84c4512f2d1fba803f51bed9049502d4f0f9985cdb262c2b8f42b87c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763892946618980112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ht8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2e77f2-6683-4646-b578-6216c86593fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3595323b3c3615ab5268d1b448d15e16bc16871d30d3df2b50cff2d2ce80c83c,PodSandboxId:5e4d145bf45488f788cbdd99c3e22c1085d2266dab00eeefe87fd8d253b990b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763892946632985958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1456787-8da1-469e-9584-dc480344f163,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8053d859e035f14082d10daa86a904a88b5bd039b91ef50068b435ef9881ef,PodSandboxId:a249685008fa9e78a5673e9c4d9457b9e3f0416cf92f7de7d66ae231b634fb53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763892942427005131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4f9c8eb8a88d86f4a836863801feb8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc6fe3f4c2f4ff8d7908e7dc676c70f18ecebd750a49627bd2afe48a865c72,PodSandboxId:1499b59c41ce484155792166490177c52e9da621a6ba25a20c0613bae23bf9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763892942404582147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5947e44df60c9199da5b55ca3922cf,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa6e61d5129e41aeaa1f7c54f709afa6319ab2bf85d158d2b3d32ef04c65547,PodSandboxId:f8fdf745ad4fc42cbad082707f62d1a2cd07db4aa246c0e1f726072941a744cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763892942390129912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfae098e8f8893f676d879c9cee58009,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e92cb1ce05648957ac77d448611e7e0f42f368531c4a927fa3cbb3c7e9c2795,PodSandboxId:00d07aa7b0eaf4edcf10dca86cf0906a12eda82c603f59aed24907a69061e081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763892942378739616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2734cf3c779d3ef83fef9aea5b3edc8,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d0f0f95-ce0d-41f7-82c3-7fa3d9376cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.066039206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43029043-6c0f-4808-9bbd-62170d90df88 name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.066104855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43029043-6c0f-4808-9bbd-62170d90df88 name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.067591562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5416588c-53b6-4606-9448-9da56fb595d4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.068043731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892958068019291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5416588c-53b6-4606-9448-9da56fb595d4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.068979215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1434431b-fb2d-433c-b844-3494b57d2e95 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.069047956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1434431b-fb2d-433c-b844-3494b57d2e95 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.069208221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82ccabe9f65dd0a228abe94efb0dd0e20fe30e54a870c6dad71965544f6b634,PodSandboxId:816b22ef1f82478b5387e78ffa6a959a8d481026cc3547c4c38a10123304da43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763892950081769677,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kmv7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec76645-d2d7-4d67-ab44-5fcb2024a33f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8c00f8d8306f38a1da38c9b65b3a54fd7024bf1cf182769e3d473cc376bdc,PodSandboxId:3c2184d84c4512f2d1fba803f51bed9049502d4f0f9985cdb262c2b8f42b87c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763892946618980112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ht8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ce2e77f2-6683-4646-b578-6216c86593fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3595323b3c3615ab5268d1b448d15e16bc16871d30d3df2b50cff2d2ce80c83c,PodSandboxId:5e4d145bf45488f788cbdd99c3e22c1085d2266dab00eeefe87fd8d253b990b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763892946632985958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1
456787-8da1-469e-9584-dc480344f163,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8053d859e035f14082d10daa86a904a88b5bd039b91ef50068b435ef9881ef,PodSandboxId:a249685008fa9e78a5673e9c4d9457b9e3f0416cf92f7de7d66ae231b634fb53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763892942427005131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4f9c8eb
8a88d86f4a836863801feb8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc6fe3f4c2f4ff8d7908e7dc676c70f18ecebd750a49627bd2afe48a865c72,PodSandboxId:1499b59c41ce484155792166490177c52e9da621a6ba25a20c0613bae23bf9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763892942404582147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5947e44df60c9199da5b55ca3922cf,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa6e61d5129e41aeaa1f7c54f709afa6319ab2bf85d158d2b3d32ef04c65547,PodSandboxId:f8fdf745ad4fc42cbad082707f62d1a2cd07db4aa246c0e1f726072941a744cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763892942390129912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfae098e8f8893f676d879c9cee58009,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e92cb1ce05648957ac77d448611e7e0f42f368531c4a927fa3cbb3c7e9c2795,PodSandboxId:00d07aa7b0eaf4edcf10dca86cf0906a12eda82c603f59aed24907a69061e081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763892942378739616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2734cf3c779d3ef83fef9aea5b3edc8,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1434431b-fb2d-433c-b844-3494b57d2e95 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.104565888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87a89d97-8161-411c-92ed-88536fd5b75f name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.104655654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87a89d97-8161-411c-92ed-88536fd5b75f name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.106175921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a20d1c4a-21b7-4fd0-af17-cffd35c0a915 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.106803591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892958106779065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a20d1c4a-21b7-4fd0-af17-cffd35c0a915 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.107764697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bc8abe1-33c5-427a-ad44-d90bd22ebc5a name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.108014490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bc8abe1-33c5-427a-ad44-d90bd22ebc5a name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.108462381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82ccabe9f65dd0a228abe94efb0dd0e20fe30e54a870c6dad71965544f6b634,PodSandboxId:816b22ef1f82478b5387e78ffa6a959a8d481026cc3547c4c38a10123304da43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763892950081769677,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kmv7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec76645-d2d7-4d67-ab44-5fcb2024a33f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8c00f8d8306f38a1da38c9b65b3a54fd7024bf1cf182769e3d473cc376bdc,PodSandboxId:3c2184d84c4512f2d1fba803f51bed9049502d4f0f9985cdb262c2b8f42b87c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763892946618980112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ht8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ce2e77f2-6683-4646-b578-6216c86593fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3595323b3c3615ab5268d1b448d15e16bc16871d30d3df2b50cff2d2ce80c83c,PodSandboxId:5e4d145bf45488f788cbdd99c3e22c1085d2266dab00eeefe87fd8d253b990b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763892946632985958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1
456787-8da1-469e-9584-dc480344f163,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8053d859e035f14082d10daa86a904a88b5bd039b91ef50068b435ef9881ef,PodSandboxId:a249685008fa9e78a5673e9c4d9457b9e3f0416cf92f7de7d66ae231b634fb53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763892942427005131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4f9c8eb
8a88d86f4a836863801feb8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc6fe3f4c2f4ff8d7908e7dc676c70f18ecebd750a49627bd2afe48a865c72,PodSandboxId:1499b59c41ce484155792166490177c52e9da621a6ba25a20c0613bae23bf9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763892942404582147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5947e44df60c9199da5b55ca3922cf,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa6e61d5129e41aeaa1f7c54f709afa6319ab2bf85d158d2b3d32ef04c65547,PodSandboxId:f8fdf745ad4fc42cbad082707f62d1a2cd07db4aa246c0e1f726072941a744cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763892942390129912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfae098e8f8893f676d879c9cee58009,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e92cb1ce05648957ac77d448611e7e0f42f368531c4a927fa3cbb3c7e9c2795,PodSandboxId:00d07aa7b0eaf4edcf10dca86cf0906a12eda82c603f59aed24907a69061e081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763892942378739616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2734cf3c779d3ef83fef9aea5b3edc8,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bc8abe1-33c5-427a-ad44-d90bd22ebc5a name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.137977431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cce739dd-ada6-4298-a749-0b7370b18e8c name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.138315614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cce739dd-ada6-4298-a749-0b7370b18e8c name=/runtime.v1.RuntimeService/Version
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.140093827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=549298b1-1ffa-47c6-a019-624b0d2e09cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.140743974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892958140722627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=549298b1-1ffa-47c6-a019-624b0d2e09cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.141735939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35c8fb2d-606e-4c3e-b9a1-a6fafb7fb488 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.141831621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35c8fb2d-606e-4c3e-b9a1-a6fafb7fb488 name=/runtime.v1.RuntimeService/ListContainers
	Nov 23 10:15:58 test-preload-718211 crio[828]: time="2025-11-23 10:15:58.142006464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a82ccabe9f65dd0a228abe94efb0dd0e20fe30e54a870c6dad71965544f6b634,PodSandboxId:816b22ef1f82478b5387e78ffa6a959a8d481026cc3547c4c38a10123304da43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763892950081769677,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kmv7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec76645-d2d7-4d67-ab44-5fcb2024a33f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8c00f8d8306f38a1da38c9b65b3a54fd7024bf1cf182769e3d473cc376bdc,PodSandboxId:3c2184d84c4512f2d1fba803f51bed9049502d4f0f9985cdb262c2b8f42b87c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763892946618980112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ht8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ce2e77f2-6683-4646-b578-6216c86593fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3595323b3c3615ab5268d1b448d15e16bc16871d30d3df2b50cff2d2ce80c83c,PodSandboxId:5e4d145bf45488f788cbdd99c3e22c1085d2266dab00eeefe87fd8d253b990b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763892946632985958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1
456787-8da1-469e-9584-dc480344f163,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8053d859e035f14082d10daa86a904a88b5bd039b91ef50068b435ef9881ef,PodSandboxId:a249685008fa9e78a5673e9c4d9457b9e3f0416cf92f7de7d66ae231b634fb53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763892942427005131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4f9c8eb
8a88d86f4a836863801feb8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc6fe3f4c2f4ff8d7908e7dc676c70f18ecebd750a49627bd2afe48a865c72,PodSandboxId:1499b59c41ce484155792166490177c52e9da621a6ba25a20c0613bae23bf9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763892942404582147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5947e44df60c9199da5b55ca3922cf,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa6e61d5129e41aeaa1f7c54f709afa6319ab2bf85d158d2b3d32ef04c65547,PodSandboxId:f8fdf745ad4fc42cbad082707f62d1a2cd07db4aa246c0e1f726072941a744cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763892942390129912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfae098e8f8893f676d879c9cee58009,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e92cb1ce05648957ac77d448611e7e0f42f368531c4a927fa3cbb3c7e9c2795,PodSandboxId:00d07aa7b0eaf4edcf10dca86cf0906a12eda82c603f59aed24907a69061e081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763892942378739616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-718211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2734cf3c779d3ef83fef9aea5b3edc8,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35c8fb2d-606e-4c3e-b9a1-a6fafb7fb488 name=/runtime.v1.RuntimeService/ListContainers
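	
	Note: the CRI-O debug entries above are the kubelet's routine polling loop over the CRI API (Version, ImageFsInfo, ListContainers) traced through the otel-collector interceptors; all seven control-plane and addon containers report CONTAINER_RUNNING. A sketch of replaying the same three RPCs by hand from the node, with the socket path taken from the node's cri-socket annotation further below:
	
	  $ minikube ssh -p test-preload-718211 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  $ minikube ssh -p test-preload-718211 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  $ minikube ssh -p test-preload-718211 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a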
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	a82ccabe9f65d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   816b22ef1f824       coredns-668d6bf9bc-kmv7f                      kube-system
	3595323b3c361       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       2                   5e4d145bf4548       storage-provisioner                           kube-system
	f9a8c00f8d830       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   11 seconds ago      Running             kube-proxy                1                   3c2184d84c451       kube-proxy-4ht8c                              kube-system
	0e8053d859e03       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   15 seconds ago      Running             kube-scheduler            1                   a249685008fa9       kube-scheduler-test-preload-718211            kube-system
	defc6fe3f4c2f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   15 seconds ago      Running             etcd                      1                   1499b59c41ce4       etcd-test-preload-718211                      kube-system
	9fa6e61d5129e       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   15 seconds ago      Running             kube-controller-manager   1                   f8fdf745ad4fc       kube-controller-manager-test-preload-718211   kube-system
	3e92cb1ce0564       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   15 seconds ago      Running             kube-apiserver            1                   00d07aa7b0eaf       kube-apiserver-test-preload-718211            kube-system
	
	
	==> coredns [a82ccabe9f65dd0a228abe94efb0dd0e20fe30e54a870c6dad71965544f6b634] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59013 - 21589 "HINFO IN 6175260224322165819.9005514033263160017. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01625779s
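	
	Note: CoreDNS 1.11.3 came back cleanly after the restart: the configuration SHA512 loaded and the HINFO self-test query resolved in ~16ms. A throwaway in-cluster lookup to double-check service DNS (sketch; the pod name dns-probe and the busybox image/tag are illustrative, not part of the test):
	
	  $ kubectl --context test-preload-718211 run dns-probe --rm -it --restart=Never \
	      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local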
	
	
	==> describe nodes <==
	Name:               test-preload-718211
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-718211
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=test-preload-718211
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_13_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:13:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-718211
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:15:47 +0000   Sun, 23 Nov 2025 10:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:15:47 +0000   Sun, 23 Nov 2025 10:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:15:47 +0000   Sun, 23 Nov 2025 10:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:15:47 +0000   Sun, 23 Nov 2025 10:15:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    test-preload-718211
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a99361351e0d448cb623201412d92ab6
	  System UUID:                a9936135-1e0d-448c-b623-201412d92ab6
	  Boot ID:                    c52c361c-b74f-4a80-ae2e-6b25ab9a84e1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-kmv7f                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     115s
	  kube-system                 etcd-test-preload-718211                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m
	  kube-system                 kube-apiserver-test-preload-718211             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-test-preload-718211    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-4ht8c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-test-preload-718211             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 113s                 kube-proxy       
	  Normal   Starting                 11s                  kube-proxy       
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node test-preload-718211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node test-preload-718211 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node test-preload-718211 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node test-preload-718211 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node test-preload-718211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node test-preload-718211 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   NodeReady                119s                 kubelet          Node test-preload-718211 status is now: NodeReady
	  Normal   RegisteredNode           116s                 node-controller  Node test-preload-718211 event: Registered Node test-preload-718211 in Controller
	  Normal   Starting                 18s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)    kubelet          Node test-preload-718211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)    kubelet          Node test-preload-718211 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x7 over 18s)    kubelet          Node test-preload-718211 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 13s                  kubelet          Node test-preload-718211 has been rebooted, boot id: c52c361c-b74f-4a80-ae2e-6b25ab9a84e1
	  Normal   RegisteredNode           10s                  node-controller  Node test-preload-718211 event: Registered Node test-preload-718211 in Controller
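	
	Note: this block is `kubectl describe node` output. The Warning "Rebooted" and the second RegisteredNode event are exactly what the preload test's stop/start cycle should produce, and the Ready condition transitioned to True at 10:15:47. To script the same readiness check instead of reading the table, a jsonpath sketch:
	
	  $ kubectl --context test-preload-718211 get node test-preload-718211 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'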
	
	
	==> dmesg <==
	[Nov23 10:15] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001425] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000743] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.000450] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.105236] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.580835] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.202781] kauditd_printk_skb: 197 callbacks suppressed
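	
	Note: the dmesg lines are routine first-boot noise on the Buildroot guest (nomodeset framebuffer, missing regulatory.db, NFSD recovery directory) and do not bear on the result. To re-read the ring buffer directly (sketch):
	
	  $ minikube ssh -p test-preload-718211 -- sudo dmesg | tail -n 20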
	
	
	==> etcd [defc6fe3f4c2f4ff8d7908e7dc676c70f18ecebd750a49627bd2afe48a865c72] <==
	{"level":"info","ts":"2025-11-23T10:15:42.855441Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","added-peer-id":"6b385368e7357343","added-peer-peer-urls":["https://192.168.39.170:2380"]}
	{"level":"info","ts":"2025-11-23T10:15:42.855664Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:15:42.856552Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:15:42.840820Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:15:42.857345Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:15:42.857359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:15:42.853524Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:15:42.853565Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2025-11-23T10:15:42.861446Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2025-11-23T10:15:44.503673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T10:15:44.503732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:15:44.503767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 received MsgPreVoteResp from 6b385368e7357343 at term 2"}
	{"level":"info","ts":"2025-11-23T10:15:44.503779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T10:15:44.503793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 received MsgVoteResp from 6b385368e7357343 at term 3"}
	{"level":"info","ts":"2025-11-23T10:15:44.503801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T10:15:44.503808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b385368e7357343 elected leader 6b385368e7357343 at term 3"}
	{"level":"info","ts":"2025-11-23T10:15:44.506324Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"6b385368e7357343","local-member-attributes":"{Name:test-preload-718211 ClientURLs:[https://192.168.39.170:2379]}","request-path":"/0/members/6b385368e7357343/attributes","cluster-id":"73eff271b33bb37a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:15:44.506671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:15:44.507055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:15:44.507804Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-23T10:15:44.509165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.170:2379"}
	{"level":"info","ts":"2025-11-23T10:15:44.507856Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:15:44.508100Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-23T10:15:44.510311Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:15:44.509851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:15:58 up 0 min,  0 users,  load average: 0.77, 0.22, 0.07
	Linux test-preload-718211 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3e92cb1ce05648957ac77d448611e7e0f42f368531c4a927fa3cbb3c7e9c2795] <==
	I1123 10:15:45.686937       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:15:45.695209       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:15:45.697597       1 shared_informer.go:320] Caches are synced for configmaps
	I1123 10:15:45.697661       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1123 10:15:45.695910       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:15:45.697852       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:15:45.697870       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:15:45.697885       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:15:45.697899       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:15:45.695967       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1123 10:15:45.695991       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:15:45.698050       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:15:45.698769       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1123 10:15:45.742479       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1123 10:15:45.742571       1 policy_source.go:240] refreshing policies
	I1123 10:15:45.758428       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:15:46.266317       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1123 10:15:46.589843       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:15:47.473335       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1123 10:15:47.505334       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1123 10:15:47.535098       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:15:47.541772       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:15:48.997446       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:15:49.200225       1 controller.go:615] quota admission added evaluator for: endpoints
	I1123 10:15:49.247812       1 controller.go:615] quota admission added evaluator for: replicasets.apps
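	
	Note: the "quota admission added evaluator" lines are normal lazy registration as the first object of each kind is created after the restart; the apiserver itself reports nothing abnormal. An aggregate health read (sketch):
	
	  $ kubectl --context test-preload-718211 get --raw='/readyz?verbose' | tail -n 5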
	
	
	==> kube-controller-manager [9fa6e61d5129e41aeaa1f7c54f709afa6319ab2bf85d158d2b3d32ef04c65547] <==
	I1123 10:15:48.947411       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1123 10:15:48.951790       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1123 10:15:48.954316       1 shared_informer.go:320] Caches are synced for resource quota
	I1123 10:15:48.959678       1 shared_informer.go:320] Caches are synced for disruption
	I1123 10:15:48.962939       1 shared_informer.go:320] Caches are synced for stateful set
	I1123 10:15:48.969281       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1123 10:15:48.969322       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1123 10:15:48.969346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1123 10:15:48.969369       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1123 10:15:48.972676       1 shared_informer.go:320] Caches are synced for HPA
	I1123 10:15:48.975068       1 shared_informer.go:320] Caches are synced for garbage collector
	I1123 10:15:48.978312       1 shared_informer.go:320] Caches are synced for PVC protection
	I1123 10:15:48.981624       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1123 10:15:48.984035       1 shared_informer.go:320] Caches are synced for daemon sets
	I1123 10:15:48.989871       1 shared_informer.go:320] Caches are synced for crt configmap
	I1123 10:15:48.994343       1 shared_informer.go:320] Caches are synced for deployment
	I1123 10:15:48.995614       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1123 10:15:48.999265       1 shared_informer.go:320] Caches are synced for persistent volume
	I1123 10:15:49.007156       1 shared_informer.go:320] Caches are synced for resource quota
	I1123 10:15:49.008263       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1123 10:15:49.255433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="307.957551ms"
	I1123 10:15:49.256134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.314µs"
	I1123 10:15:50.359624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="138.754µs"
	I1123 10:15:55.114227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.265121ms"
	I1123 10:15:55.115301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="836.997µs"
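	
	Note: the controller-manager log is all cache syncs plus coredns-668d6bf9bc ReplicaSet reconciles settling into steady state. To confirm the deployment behind that ReplicaSet converged (sketch, assuming the standard kubeadm deployment name coredns):
	
	  $ kubectl --context test-preload-718211 -n kube-system rollout status deployment/coredns --timeout=30s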
	
	
	==> kube-proxy [f9a8c00f8d8306f38a1da38c9b65b3a54fd7024bf1cf182769e3d473cc376bdc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1123 10:15:47.004196       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1123 10:15:47.022099       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E1123 10:15:47.022284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:15:47.061638       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1123 10:15:47.061710       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1123 10:15:47.061743       1 server_linux.go:170] "Using iptables Proxier"
	I1123 10:15:47.064996       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:15:47.065905       1 server.go:497] "Version info" version="v1.32.0"
	I1123 10:15:47.065935       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:15:47.069111       1 config.go:199] "Starting service config controller"
	I1123 10:15:47.069159       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1123 10:15:47.069185       1 config.go:105] "Starting endpoint slice config controller"
	I1123 10:15:47.069189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1123 10:15:47.069883       1 config.go:329] "Starting node config controller"
	I1123 10:15:47.069969       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1123 10:15:47.170009       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1123 10:15:47.170094       1 shared_informer.go:320] Caches are synced for node config
	I1123 10:15:47.170105       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0e8053d859e035f14082d10daa86a904a88b5bd039b91ef50068b435ef9881ef] <==
	I1123 10:15:43.561442       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:15:45.648756       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:15:45.650570       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:15:45.650625       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:15:45.650653       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:15:45.681967       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1123 10:15:45.682068       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:15:45.684663       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:15:45.684704       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 10:15:45.685062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1123 10:15:45.685412       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:15:45.785173       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: I1123 10:15:45.829253    1159 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-718211"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: I1123 10:15:45.830033    1159 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: I1123 10:15:45.831597    1159 setters.go:602] "Node became not ready" node="test-preload-718211" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T10:15:45Z","lastTransitionTime":"2025-11-23T10:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: E1123 10:15:45.844207    1159 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-718211\" already exists" pod="kube-system/kube-controller-manager-test-preload-718211"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: I1123 10:15:45.844346    1159 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-718211"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: E1123 10:15:45.858273    1159 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-718211\" already exists" pod="kube-system/kube-scheduler-test-preload-718211"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: I1123 10:15:45.858380    1159 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-718211"
	Nov 23 10:15:45 test-preload-718211 kubelet[1159]: E1123 10:15:45.866395    1159 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-718211\" already exists" pod="kube-system/etcd-test-preload-718211"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: I1123 10:15:46.175018    1159 apiserver.go:52] "Watching apiserver"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: E1123 10:15:46.179198    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-kmv7f" podUID="fec76645-d2d7-4d67-ab44-5fcb2024a33f"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: I1123 10:15:46.192850    1159 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: I1123 10:15:46.259189    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce2e77f2-6683-4646-b578-6216c86593fa-xtables-lock\") pod \"kube-proxy-4ht8c\" (UID: \"ce2e77f2-6683-4646-b578-6216c86593fa\") " pod="kube-system/kube-proxy-4ht8c"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: I1123 10:15:46.259258    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a1456787-8da1-469e-9584-dc480344f163-tmp\") pod \"storage-provisioner\" (UID: \"a1456787-8da1-469e-9584-dc480344f163\") " pod="kube-system/storage-provisioner"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: I1123 10:15:46.259353    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce2e77f2-6683-4646-b578-6216c86593fa-lib-modules\") pod \"kube-proxy-4ht8c\" (UID: \"ce2e77f2-6683-4646-b578-6216c86593fa\") " pod="kube-system/kube-proxy-4ht8c"
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: E1123 10:15:46.260224    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: E1123 10:15:46.261752    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume podName:fec76645-d2d7-4d67-ab44-5fcb2024a33f nodeName:}" failed. No retries permitted until 2025-11-23 10:15:46.760966504 +0000 UTC m=+6.695850274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume") pod "coredns-668d6bf9bc-kmv7f" (UID: "fec76645-d2d7-4d67-ab44-5fcb2024a33f") : object "kube-system"/"coredns" not registered
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: E1123 10:15:46.764204    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 10:15:46 test-preload-718211 kubelet[1159]: E1123 10:15:46.764295    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume podName:fec76645-d2d7-4d67-ab44-5fcb2024a33f nodeName:}" failed. No retries permitted until 2025-11-23 10:15:47.764279538 +0000 UTC m=+7.699163306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume") pod "coredns-668d6bf9bc-kmv7f" (UID: "fec76645-d2d7-4d67-ab44-5fcb2024a33f") : object "kube-system"/"coredns" not registered
	Nov 23 10:15:47 test-preload-718211 kubelet[1159]: E1123 10:15:47.249644    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-kmv7f" podUID="fec76645-d2d7-4d67-ab44-5fcb2024a33f"
	Nov 23 10:15:47 test-preload-718211 kubelet[1159]: I1123 10:15:47.737290    1159 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 23 10:15:47 test-preload-718211 kubelet[1159]: E1123 10:15:47.772039    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 10:15:47 test-preload-718211 kubelet[1159]: E1123 10:15:47.772113    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume podName:fec76645-d2d7-4d67-ab44-5fcb2024a33f nodeName:}" failed. No retries permitted until 2025-11-23 10:15:49.772100166 +0000 UTC m=+9.706983945 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fec76645-d2d7-4d67-ab44-5fcb2024a33f-config-volume") pod "coredns-668d6bf9bc-kmv7f" (UID: "fec76645-d2d7-4d67-ab44-5fcb2024a33f") : object "kube-system"/"coredns" not registered
	Nov 23 10:15:50 test-preload-718211 kubelet[1159]: E1123 10:15:50.259271    1159 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892950258965569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 23 10:15:50 test-preload-718211 kubelet[1159]: E1123 10:15:50.259296    1159 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763892950258965569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 23 10:15:55 test-preload-718211 kubelet[1159]: I1123 10:15:55.073751    1159 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [3595323b3c3615ab5268d1b448d15e16bc16871d30d3df2b50cff2d2ce80c83c] <==
	I1123 10:15:46.804191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-718211 -n test-preload-718211
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-718211 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-718211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-718211
--- FAIL: TestPreload (175.47s)
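
The post-mortem logs above record three recoverable symptoms: kube-proxy could not remove leftover nftables rules ("Operation not supported") and ran with the iptables proxier instead, the kubelet held the node NotReady until a CNI config appeared under /etc/cni/net.d/, and the coredns config-volume mount kept retrying until the "coredns" configmap was registered. A minimal diagnostic sketch for re-checking those conditions against a live profile; only the profile name is taken from the log, and the commands are ordinary minikube/kubectl usage:

	out/minikube-linux-amd64 -p test-preload-718211 ssh "ls /etc/cni/net.d/"        # is a CNI config present?
	out/minikube-linux-amd64 -p test-preload-718211 ssh "sudo iptables --version"   # legacy vs nf_tables backend
	kubectl --context test-preload-718211 get nodes -o wide                         # node Ready condition
	kubectl --context test-preload-718211 -n kube-system get pods -o wide           # coredns / kube-proxy state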


Test pass (309/351)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 32.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 18.19
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 105.02
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 207.14
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 14.52
35 TestAddons/parallel/Registry 23
36 TestAddons/parallel/RegistryCreds 0.68
38 TestAddons/parallel/InspektorGadget 11.92
39 TestAddons/parallel/MetricsServer 6.57
41 TestAddons/parallel/CSI 56.92
42 TestAddons/parallel/Headlamp 21.66
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 58.02
45 TestAddons/parallel/NvidiaDevicePlugin 6.85
46 TestAddons/parallel/Yakd 12.48
48 TestAddons/StoppedEnableDisable 84.97
49 TestCertOptions 43.73
50 TestCertExpiration 305.81
52 TestForceSystemdFlag 67.42
53 TestForceSystemdEnv 44.02
58 TestErrorSpam/setup 36.34
59 TestErrorSpam/start 0.31
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.79
63 TestErrorSpam/stop 4.93
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 85.71
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.16
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 2.65
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 36.08
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.4
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 4.65
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 28.01
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 11.42
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 53.51
101 TestFunctional/parallel/SSHCmd 0.32
102 TestFunctional/parallel/CpCmd 0.98
103 TestFunctional/parallel/MySQL 32.98
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.29
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
113 TestFunctional/parallel/License 0.72
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
125 TestFunctional/parallel/ProfileCmd/profile_list 0.31
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
127 TestFunctional/parallel/MountCmd/any-port 13.99
128 TestFunctional/parallel/ServiceCmd/List 0.24
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
131 TestFunctional/parallel/ServiceCmd/Format 0.27
132 TestFunctional/parallel/ServiceCmd/URL 0.25
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.52
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
139 TestFunctional/parallel/ImageCommands/ImageBuild 7.47
140 TestFunctional/parallel/ImageCommands/Setup 2.38
141 TestFunctional/parallel/MountCmd/specific-port 1.57
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.93
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.97
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.62
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 214.89
161 TestMultiControlPlane/serial/DeployApp 10.93
162 TestMultiControlPlane/serial/PingHostFromPods 1.3
163 TestMultiControlPlane/serial/AddWorkerNode 46.25
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 10.65
167 TestMultiControlPlane/serial/StopSecondaryNode 89.01
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 40.95
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 392.63
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.35
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
174 TestMultiControlPlane/serial/StopCluster 256.67
175 TestMultiControlPlane/serial/RestartCluster 104.16
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
177 TestMultiControlPlane/serial/AddSecondaryNode 82.7
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
183 TestJSONOutput/start/Command 75.93
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.72
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.62
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.15
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 78.09
215 TestMountStart/serial/StartWithMountFirst 23.08
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 19.79
218 TestMountStart/serial/VerifyMountSecond 0.29
219 TestMountStart/serial/DeleteFirst 0.68
220 TestMountStart/serial/VerifyMountPostDelete 0.29
221 TestMountStart/serial/Stop 1.29
222 TestMountStart/serial/RestartStopped 22.23
223 TestMountStart/serial/VerifyMountPostStop 0.3
226 TestMultiNode/serial/FreshStart2Nodes 101.43
227 TestMultiNode/serial/DeployApp2Nodes 9.34
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 42.48
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.45
232 TestMultiNode/serial/CopyFile 5.86
233 TestMultiNode/serial/StopNode 2.38
234 TestMultiNode/serial/StartAfterStop 41.79
235 TestMultiNode/serial/RestartKeepsNodes 316.86
236 TestMultiNode/serial/DeleteNode 2.56
237 TestMultiNode/serial/StopMultiNode 174.19
238 TestMultiNode/serial/RestartMultiNode 120.49
239 TestMultiNode/serial/ValidateNameConflict 42.95
246 TestScheduledStopUnix 108.52
250 TestRunningBinaryUpgrade 132.75
252 TestKubernetesUpgrade 528.9
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 86.61
264 TestNetworkPlugins/group/false 3.41
268 TestISOImage/Setup 74.83
270 TestISOImage/Binaries/crictl 0.19
271 TestISOImage/Binaries/curl 0.18
272 TestISOImage/Binaries/docker 0.18
273 TestISOImage/Binaries/git 0.19
274 TestISOImage/Binaries/iptables 0.18
275 TestISOImage/Binaries/podman 0.19
276 TestISOImage/Binaries/rsync 0.19
277 TestISOImage/Binaries/socat 0.19
278 TestISOImage/Binaries/wget 0.18
279 TestISOImage/Binaries/VBoxControl 0.17
280 TestISOImage/Binaries/VBoxService 0.19
281 TestNoKubernetes/serial/StartWithStopK8s 47.62
282 TestStoppedBinaryUpgrade/Setup 3.73
283 TestStoppedBinaryUpgrade/Upgrade 119.48
284 TestNoKubernetes/serial/Start 45.17
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
287 TestNoKubernetes/serial/ProfileList 5.96
288 TestNoKubernetes/serial/Stop 1.3
289 TestNoKubernetes/serial/StartNoArgs 64.6
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
300 TestPause/serial/Start 107.99
301 TestNetworkPlugins/group/auto/Start 53.82
302 TestPause/serial/SecondStartNoReconfiguration 35.67
303 TestNetworkPlugins/group/auto/KubeletFlags 0.17
304 TestNetworkPlugins/group/auto/NetCatPod 11.24
305 TestPause/serial/Pause 0.76
306 TestPause/serial/VerifyStatus 0.24
307 TestPause/serial/Unpause 0.71
308 TestPause/serial/PauseAgain 0.9
309 TestPause/serial/DeletePaused 0.9
310 TestPause/serial/VerifyDeletedResources 15.31
311 TestNetworkPlugins/group/auto/DNS 0.14
312 TestNetworkPlugins/group/auto/Localhost 0.11
313 TestNetworkPlugins/group/auto/HairPin 0.12
314 TestNetworkPlugins/group/kindnet/Start 60.53
315 TestNetworkPlugins/group/calico/Start 89.31
316 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
318 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
319 TestNetworkPlugins/group/custom-flannel/Start 74.51
320 TestNetworkPlugins/group/kindnet/DNS 0.25
321 TestNetworkPlugins/group/kindnet/Localhost 0.2
322 TestNetworkPlugins/group/kindnet/HairPin 0.17
323 TestNetworkPlugins/group/enable-default-cni/Start 80.69
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.17
326 TestNetworkPlugins/group/calico/NetCatPod 12.27
327 TestNetworkPlugins/group/calico/DNS 0.22
328 TestNetworkPlugins/group/calico/Localhost 0.15
329 TestNetworkPlugins/group/calico/HairPin 0.14
330 TestNetworkPlugins/group/flannel/Start 75.16
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
333 TestNetworkPlugins/group/custom-flannel/DNS 0.2
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
336 TestNetworkPlugins/group/bridge/Start 87.12
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
343 TestStartStop/group/old-k8s-version/serial/FirstStart 95.84
345 TestStartStop/group/no-preload/serial/FirstStart 124.91
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
348 TestNetworkPlugins/group/flannel/NetCatPod 10.27
349 TestNetworkPlugins/group/flannel/DNS 0.19
350 TestNetworkPlugins/group/flannel/Localhost 0.22
351 TestNetworkPlugins/group/flannel/HairPin 0.16
353 TestStartStop/group/embed-certs/serial/FirstStart 88.8
354 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
355 TestNetworkPlugins/group/bridge/NetCatPod 11.29
356 TestNetworkPlugins/group/bridge/DNS 0.24
357 TestNetworkPlugins/group/bridge/Localhost 0.24
358 TestNetworkPlugins/group/bridge/HairPin 0.16
360 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.39
361 TestStartStop/group/old-k8s-version/serial/DeployApp 14.35
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
363 TestStartStop/group/old-k8s-version/serial/Stop 86.84
364 TestStartStop/group/no-preload/serial/DeployApp 14.3
365 TestStartStop/group/embed-certs/serial/DeployApp 15.28
366 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
367 TestStartStop/group/no-preload/serial/Stop 87.96
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
369 TestStartStop/group/embed-certs/serial/Stop 72.73
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.27
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.03
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
374 TestStartStop/group/old-k8s-version/serial/SecondStart 46.75
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
376 TestStartStop/group/embed-certs/serial/SecondStart 49.25
377 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
378 TestStartStop/group/no-preload/serial/SecondStart 68.73
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.01
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.01
382 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
383 TestStartStop/group/old-k8s-version/serial/Pause 3.01
385 TestStartStop/group/newest-cni/serial/FirstStart 50.61
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
387 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 67.54
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/embed-certs/serial/Pause 2.6
392 TestISOImage/PersistentMounts//data 0.17
393 TestISOImage/PersistentMounts//var/lib/docker 0.17
394 TestISOImage/PersistentMounts//var/lib/cni 0.18
395 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
396 TestISOImage/PersistentMounts//var/lib/minikube 0.16
397 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
398 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
399 TestISOImage/VersionJSON 0.17
400 TestISOImage/eBPFSupport 0.17
401 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.12
402 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
403 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
404 TestStartStop/group/no-preload/serial/Pause 3.24
405 TestStartStop/group/newest-cni/serial/DeployApp 0
406 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
407 TestStartStop/group/newest-cni/serial/Stop 10.76
408 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
409 TestStartStop/group/newest-cni/serial/SecondStart 36.69
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
417 TestStartStop/group/newest-cni/serial/Pause 3.15
TestDownloadOnly/v1.28.0/json-events (32.64s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-276667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-276667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (32.635203434s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (32.64s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 09:20:39.217986    7590 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 09:20:39.218083    7590 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
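
The preload-exists check is just a file-existence test against the cache path named in the log; to inspect that cache by hand (path verbatim from the log above; a default install would use ~/.minikube instead, assuming MINIKUBE_HOME is unset):

	ls -lh /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/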

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-276667
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-276667: exit status 85 (72.073497ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-276667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-276667 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:20:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:20:06.635694    7602 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:20:06.635923    7602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:06.635933    7602 out.go:374] Setting ErrFile to fd 2...
	I1123 09:20:06.635946    7602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:06.636160    7602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	W1123 09:20:06.636319    7602 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21968-3638/.minikube/config/config.json: open /home/jenkins/minikube-integration/21968-3638/.minikube/config/config.json: no such file or directory
	I1123 09:20:06.636829    7602 out.go:368] Setting JSON to true
	I1123 09:20:06.637677    7602 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":145,"bootTime":1763889462,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:20:06.637731    7602 start.go:143] virtualization: kvm guest
	I1123 09:20:06.641756    7602 out.go:99] [download-only-276667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 09:20:06.641871    7602 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 09:20:06.641908    7602 notify.go:221] Checking for updates...
	I1123 09:20:06.643243    7602 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:20:06.644515    7602 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:20:06.645757    7602 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:20:06.647081    7602 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:20:06.648224    7602 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:20:06.650238    7602 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:20:06.650465    7602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:20:07.135033    7602 out.go:99] Using the kvm2 driver based on user configuration
	I1123 09:20:07.135059    7602 start.go:309] selected driver: kvm2
	I1123 09:20:07.135071    7602 start.go:927] validating driver "kvm2" against <nil>
	I1123 09:20:07.135402    7602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:20:07.135909    7602 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1123 09:20:07.136081    7602 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:20:07.136122    7602 cni.go:84] Creating CNI manager for ""
	I1123 09:20:07.136171    7602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:20:07.136179    7602 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1123 09:20:07.136244    7602 start.go:353] cluster config:
	{Name:download-only-276667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-276667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:20:07.136403    7602 iso.go:125] acquiring lock: {Name:mkda1f2156fa5a41237d44afe14c60be86e641cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:20:07.137835    7602 out.go:99] Downloading VM boot image ...
	I1123 09:20:07.137866    7602 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21968-3638/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1123 09:20:21.389217    7602 out.go:99] Starting "download-only-276667" primary control-plane node in "download-only-276667" cluster
	I1123 09:20:21.389272    7602 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 09:20:21.538880    7602 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 09:20:21.538908    7602 cache.go:65] Caching tarball of preloaded images
	I1123 09:20:21.539133    7602 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 09:20:21.540959    7602 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 09:20:21.540974    7602 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 09:20:22.228742    7602 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1123 09:20:22.228856    7602 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-276667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-276667"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
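
The "Last Start" log above also shows how the preload download is integrity-checked: the md5 digest is fetched from the GCS API and appended to the download URL as ?checksum=md5:…. A hand-verification sketch, with the digest and path taken verbatim from the log:

	md5sum /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected, per the "Got checksum from GCS API" line above: 72bc7f8573f574c02d8c9a9b3496176b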

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-276667
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (18.19s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-257281 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-257281 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.189302076s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (18.19s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 09:20:57.770963    7590 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 09:20:57.771013    7590 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-257281
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-257281: exit status 85 (71.134835ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-276667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-276667 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ delete  │ -p download-only-276667                                                                                                                                                 │ download-only-276667 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │ 23 Nov 25 09:20 UTC │
	│ start   │ -o=json --download-only -p download-only-257281 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-257281 │ jenkins │ v1.37.0 │ 23 Nov 25 09:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:20:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:20:39.631920    7888 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:20:39.632216    7888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:39.632227    7888 out.go:374] Setting ErrFile to fd 2...
	I1123 09:20:39.632234    7888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:20:39.632443    7888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:20:39.632901    7888 out.go:368] Setting JSON to true
	I1123 09:20:39.633714    7888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":178,"bootTime":1763889462,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:20:39.633765    7888 start.go:143] virtualization: kvm guest
	I1123 09:20:39.635603    7888 out.go:99] [download-only-257281] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:20:39.635736    7888 notify.go:221] Checking for updates...
	I1123 09:20:39.637077    7888 out.go:171] MINIKUBE_LOCATION=21968
	I1123 09:20:39.638592    7888 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:20:39.640108    7888 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:20:39.641293    7888 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:20:39.642550    7888 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 09:20:39.644820    7888 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 09:20:39.645041    7888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:20:39.676887    7888 out.go:99] Using the kvm2 driver based on user configuration
	I1123 09:20:39.676918    7888 start.go:309] selected driver: kvm2
	I1123 09:20:39.676927    7888 start.go:927] validating driver "kvm2" against <nil>
	I1123 09:20:39.677262    7888 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:20:39.677724    7888 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1123 09:20:39.677874    7888 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 09:20:39.677901    7888 cni.go:84] Creating CNI manager for ""
	I1123 09:20:39.677974    7888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1123 09:20:39.677987    7888 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1123 09:20:39.678040    7888 start.go:353] cluster config:
	{Name:download-only-257281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-257281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:20:39.678134    7888 iso.go:125] acquiring lock: {Name:mkda1f2156fa5a41237d44afe14c60be86e641cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:20:39.679360    7888 out.go:99] Starting "download-only-257281" primary control-plane node in "download-only-257281" cluster
	I1123 09:20:39.679386    7888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:20:40.345959    7888 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:20:40.346022    7888 cache.go:65] Caching tarball of preloaded images
	I1123 09:20:40.346267    7888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:20:40.347952    7888 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 09:20:40.347971    7888 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 09:20:41.031577    7888 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1123 09:20:41.031628    7888 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21968-3638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-257281 host does not exist
	  To start a cluster, run: "minikube start -p download-only-257281"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-257281
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1123 09:20:58.416309    7590 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-687799 --alsologtostderr --binary-mirror http://127.0.0.1:40155 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-687799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-687799
--- PASS: TestBinaryMirror (0.63s)
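
Per the URL logged above, --binary-mirror substitutes a local HTTP endpoint for dl.k8s.io, so the mirror has to reproduce the /release/<version>/bin/linux/amd64/ path layout and, since the download URL carries a checksum=file:…sha256 parameter, serve the matching .sha256 files as well. A minimal sketch of standing up such a mirror; the staging directory and profile name are hypothetical:

	mkdir -p mirror/release/v1.34.1/bin/linux/amd64          # hypothetical staging dir
	# stage kubectl (and any other binaries minikube caches) plus their .sha256 files there, then:
	( cd mirror && python3 -m http.server 40155 ) &
	out/minikube-linux-amd64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:40155 --driver=kvm2 --container-runtime=crio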

TestOffline (105.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-530151 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-530151 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m44.04756554s)
helpers_test.go:175: Cleaning up "offline-crio-530151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-530151
--- PASS: TestOffline (105.02s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-894046
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-894046: exit status 85 (63.10539ms)

-- stdout --
	* Profile "addons-894046" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-894046"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-894046
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-894046: exit status 85 (64.176636ms)

-- stdout --
	* Profile "addons-894046" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-894046"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (207.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-894046 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-894046 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m27.140175575s)
--- PASS: TestAddons/Setup (207.14s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-894046 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-894046 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (14.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-894046 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-894046 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4d0ad8ee-037a-402f-b939-85865174f054] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4d0ad8ee-037a-402f-b939-85865174f054] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 14.005441759s
addons_test.go:694: (dbg) Run:  kubectl --context addons-894046 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-894046 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-894046 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (14.52s)

TestAddons/parallel/Registry (23s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.666853ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-sr8qg" [c79ec895-1d09-4632-859b-705ab6ff1179] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011626475s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-qjx29" [f4a60b63-e27f-49b2-a26d-f03d7bff66cd] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003271657s
addons_test.go:392: (dbg) Run:  kubectl --context addons-894046 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-894046 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-894046 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.249590701s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 ip
2025/11/23 09:25:12 [DEBUG] GET http://192.168.39.58:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.00s)

TestAddons/parallel/RegistryCreds (0.68s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.708452ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-894046
addons_test.go:332: (dbg) Run:  kubectl --context addons-894046 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-wgh9c" [17b2510e-8c61-4100-b392-3094eb3babda] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004802059s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable inspektor-gadget --alsologtostderr -v=1: (5.917611737s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

TestAddons/parallel/MetricsServer (6.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.876129ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ngrml" [593c3ff1-a033-4c7c-add4-d09e6fb259d2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007515072s
addons_test.go:463: (dbg) Run:  kubectl --context addons-894046 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable metrics-server --alsologtostderr -v=1: (1.478314097s)
--- PASS: TestAddons/parallel/MetricsServer (6.57s)

TestAddons/parallel/CSI (56.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1123 09:25:12.644596    7590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 09:25:12.653118    7590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 09:25:12.653144    7590 kapi.go:107] duration metric: took 8.559152ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.569318ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-894046 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-894046 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2df37e25-f5f9-45a7-a590-20ad818cb1a4] Pending
helpers_test.go:352: "task-pv-pod" [2df37e25-f5f9-45a7-a590-20ad818cb1a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2df37e25-f5f9-45a7-a590-20ad818cb1a4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004593725s
addons_test.go:572: (dbg) Run:  kubectl --context addons-894046 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-894046 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-894046 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-894046 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-894046 delete pod task-pv-pod: (1.289244859s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-894046 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-894046 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-894046 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5127909f-9b34-4de2-9f6a-7835c1b03a25] Pending
helpers_test.go:352: "task-pv-pod-restore" [5127909f-9b34-4de2-9f6a-7835c1b03a25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5127909f-9b34-4de2-9f6a-7835c1b03a25] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004488373s
addons_test.go:614: (dbg) Run:  kubectl --context addons-894046 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-894046 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-894046 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.869864649s)
--- PASS: TestAddons/parallel/CSI (56.92s)

TestAddons/parallel/Headlamp (21.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-894046 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-gnq7c" [a1c4d30d-d97b-42e5-b352-17367f644fe6] Pending
helpers_test.go:352: "headlamp-dfcdc64b-gnq7c" [a1c4d30d-d97b-42e5-b352-17367f644fe6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-gnq7c" [a1c4d30d-d97b-42e5-b352-17367f644fe6] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003527096s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable headlamp --alsologtostderr -v=1: (5.770873851s)
--- PASS: TestAddons/parallel/Headlamp (21.66s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-vvtnv" [a12caa09-5671-43ed-a857-b62039a7dfc8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00477709s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (58.02s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-894046 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-894046 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [20b3bb8b-ee48-4e27-9b75-a85d5848d47b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [20b3bb8b-ee48-4e27-9b75-a85d5848d47b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [20b3bb8b-ee48-4e27-9b75-a85d5848d47b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.005612676s
addons_test.go:967: (dbg) Run:  kubectl --context addons-894046 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 ssh "cat /opt/local-path-provisioner/pvc-8c9015e7-12ea-468b-b7fc-daa74eb34219_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-894046 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-894046 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.199094699s)
--- PASS: TestAddons/parallel/LocalPath (58.02s)

TestAddons/parallel/NvidiaDevicePlugin (6.85s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7qrmh" [493cdf00-f9b3-4b3e-8a0a-e8c7f74af685] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.027934842s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

TestAddons/parallel/Yakd (12.48s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bxhts" [a4d53484-d746-452d-bd06-6c181efb33fe] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004233425s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-894046 addons disable yakd --alsologtostderr -v=1: (6.478789348s)
--- PASS: TestAddons/parallel/Yakd (12.48s)

TestAddons/StoppedEnableDisable (84.97s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-894046
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-894046: (1m24.776109995s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-894046
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-894046
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-894046
--- PASS: TestAddons/StoppedEnableDisable (84.97s)

TestCertOptions (43.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-956522 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-956522 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (42.504831684s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-956522 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-956522 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-956522 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-956522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-956522
--- PASS: TestCertOptions (43.73s)

TestCertExpiration (305.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-238017 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-238017 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.35862363s)
E1123 10:22:38.477280    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-238017 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-238017 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (48.51675982s)
helpers_test.go:175: Cleaning up "cert-expiration-238017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-238017
I1123 10:25:50.298462    7590 config.go:182] Loaded profile config "kindnet-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestCertExpiration (305.81s)

TestForceSystemdFlag (67.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-604221 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-604221 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.367999088s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-604221 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-604221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-604221
--- PASS: TestForceSystemdFlag (67.42s)

TestForceSystemdEnv (44.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-680045 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-680045 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.162496653s)
helpers_test.go:175: Cleaning up "force-systemd-env-680045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-680045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-680045: (1.855739295s)
--- PASS: TestForceSystemdEnv (44.02s)

TestErrorSpam/setup (36.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-115890 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-115890 --driver=kvm2  --container-runtime=crio
E1123 09:29:26.897624    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:26.904025    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:26.915388    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:26.936777    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:26.978125    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:27.059564    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:27.221066    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:27.542725    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:28.184819    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:29.467383    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:32.030271    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:29:37.151801    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-115890 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-115890 --driver=kvm2  --container-runtime=crio: (36.341612699s)
--- PASS: TestErrorSpam/setup (36.34s)

TestErrorSpam/start (0.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 status
--- PASS: TestErrorSpam/status (0.65s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (4.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop: (2.158950184s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop: (1.431725171s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop
E1123 09:29:47.393709    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-115890 --log_dir /tmp/nospam-115890 stop: (1.338873595s)
--- PASS: TestErrorSpam/stop (4.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21968-3638/.minikube/files/etc/test/nested/copy/7590/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1123 09:30:07.875373    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:30:48.838633    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-173031 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.710768538s)
--- PASS: TestFunctional/serial/StartWithProxy (85.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.16s)

=== RUN   TestFunctional/serial/SoftStart
I1123 09:31:14.362860    7590 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-173031 --alsologtostderr -v=8: (32.161985468s)
functional_test.go:678: soft start took 32.162674697s for "functional-173031" cluster.
I1123 09:31:46.525219    7590 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.16s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-173031 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:3.1: (1.065933575s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:3.3: (1.135649563s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 cache add registry.k8s.io/pause:latest: (1.120568572s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-173031 /tmp/TestFunctionalserialCacheCmdcacheadd_local1303106819/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache add minikube-local-cache-test:functional-173031
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 cache add minikube-local-cache-test:functional-173031: (2.309104391s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache delete minikube-local-cache-test:functional-173031
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-173031
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (170.820866ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 kubectl -- --context functional-173031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-173031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 09:32:10.759965    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-173031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.083435756s)
functional_test.go:776: restart took 36.083566016s for "functional-173031" cluster.
I1123 09:32:30.880739    7590 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.08s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-173031 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 logs: (1.402227526s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 logs --file /tmp/TestFunctionalserialLogsFileCmd2035195367/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 logs --file /tmp/TestFunctionalserialLogsFileCmd2035195367/001/logs.txt: (1.389186609s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.65s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-173031 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-173031
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-173031: exit status 115 (231.871137ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.231:31114 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-173031 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-173031 delete -f testdata/invalidsvc.yaml: (1.222664168s)
--- PASS: TestFunctional/serial/InvalidService (4.65s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 config get cpus: exit status 14 (58.648528ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 config get cpus: exit status 14 (63.706615ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
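
The exit codes are the substance of this test: `config get` on an unset key fails with exit status 14 and succeeds once the key is set. A hand-run sketch of the same sequence (any existing profile works; `minikube` stands in for the binary under test):

    minikube -p functional-173031 config unset cpus
    minikube -p functional-173031 config get cpus; echo $?   # 14: key not found in config
    minikube -p functional-173031 config set cpus 2
    minikube -p functional-173031 config get cpus; echo $?   # prints 2, exits 0
    minikube -p functional-173031 config unset cpus          # restore defaults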

TestFunctional/parallel/DashboardCmd (28.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-173031 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-173031 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 13935: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.01s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-173031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (119.556008ms)
-- stdout --
	* [functional-173031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1123 09:32:51.177255   13842 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:51.177556   13842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:51.177565   13842 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:51.177569   13842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:51.177744   13842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:32:51.178186   13842 out.go:368] Setting JSON to false
	I1123 09:32:51.179001   13842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":909,"bootTime":1763889462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:32:51.179058   13842 start.go:143] virtualization: kvm guest
	I1123 09:32:51.181134   13842 out.go:179] * [functional-173031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:32:51.182898   13842 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:32:51.182883   13842 notify.go:221] Checking for updates...
	I1123 09:32:51.184142   13842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:32:51.185391   13842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:32:51.186563   13842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:32:51.187614   13842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:32:51.188566   13842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:32:51.189980   13842 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:51.190458   13842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:32:51.229956   13842 out.go:179] * Using the kvm2 driver based on existing profile
	I1123 09:32:51.231139   13842 start.go:309] selected driver: kvm2
	I1123 09:32:51.231155   13842 start.go:927] validating driver "kvm2" against &{Name:functional-173031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-173031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:32:51.231304   13842 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:32:51.233518   13842 out.go:203] 
	W1123 09:32:51.234809   13842 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 09:32:51.235907   13842 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
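
Both dry runs validate flags against the existing profile without touching the VM; the first fails fast because 250MB is below the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23). A sketch of the same probe:

    # Rejected during validation, exit status 23.
    minikube start -p functional-173031 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio; echo $?
    # Without the memory override, validation passes.
    minikube start -p functional-173031 --dry-run --driver=kvm2 --container-runtime=crio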

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-173031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-173031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.55605ms)
-- stdout --
	* [functional-173031] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1123 09:32:51.062676   13826 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:32:51.062779   13826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:51.062785   13826 out.go:374] Setting ErrFile to fd 2...
	I1123 09:32:51.062791   13826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:32:51.063219   13826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:32:51.063727   13826 out.go:368] Setting JSON to false
	I1123 09:32:51.064599   13826 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":909,"bootTime":1763889462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:32:51.064654   13826 start.go:143] virtualization: kvm guest
	I1123 09:32:51.066526   13826 out.go:179] * [functional-173031] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 09:32:51.068212   13826 notify.go:221] Checking for updates...
	I1123 09:32:51.068261   13826 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 09:32:51.069498   13826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:32:51.070867   13826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 09:32:51.072249   13826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 09:32:51.073376   13826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:32:51.074447   13826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:32:51.075928   13826 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:32:51.076426   13826 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:32:51.109124   13826 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1123 09:32:51.110879   13826 start.go:309] selected driver: kvm2
	I1123 09:32:51.110891   13826 start.go:927] validating driver "kvm2" against &{Name:functional-173031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-173031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:32:51.111001   13826 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:32:51.112887   13826 out.go:203] 
	W1123 09:32:51.113912   13826 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 09:32:51.114908   13826 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
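
The second invocation shows that `status -f` takes an arbitrary Go template over the status struct; the `kublet:` text above is just a literal label in the test's template, while the field itself is `.Kubelet`. A sketch of the three output modes:

    minikube -p functional-173031 status          # human-readable
    minikube -p functional-173031 status -o json  # machine-readable
    minikube -p functional-173031 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'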

TestFunctional/parallel/ServiceCmdConnect (11.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-173031 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-173031 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-f85xt" [37f48869-8c8b-4442-96e9-19420b901ca3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-f85xt" [37f48869-8c8b-4442-96e9-19420b901ca3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004848037s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.231:32059
functional_test.go:1680: http://192.168.39.231:32059: success! body:
Request served by hello-node-connect-7d85dfc575-f85xt
HTTP/1.1 GET /
Host: 192.168.39.231:32059
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.42s)
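
The flow under test: deploy an echo server, expose it as a NodePort Service, resolve the node URL through minikube, then issue a request. Roughly, by hand (the test uses a Go HTTP client; curl is an equivalent stand-in):

    kubectl --context functional-173031 create deployment hello-node-connect \
      --image kicbase/echo-server
    kubectl --context functional-173031 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(minikube -p functional-173031 service hello-node-connect --url)
    curl -s "$URL"   # echo-server reflects the request back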

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (53.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2dd6b76f-50e8-4665-9254-c6e36f9e7566] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.045126619s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-173031 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-173031 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-173031 get pvc myclaim -o=json
I1123 09:32:43.741934    7590 retry.go:31] will retry after 2.523984986s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:52371fa9-9abf-4ca3-95da-71fc8895ef03 ResourceVersion:723 Generation:0 CreationTimestamp:2025-11-23 09:32:43 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00177d0b0 VolumeMode:0xc00177d0c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-173031 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-173031 apply -f testdata/storage-provisioner/pod.yaml
I1123 09:32:46.448859    7590 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d2cd6d28-1020-4090-8adf-e29d0ee61ccc] Pending
helpers_test.go:352: "sp-pod" [d2cd6d28-1020-4090-8adf-e29d0ee61ccc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d2cd6d28-1020-4090-8adf-e29d0ee61ccc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005783422s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-173031 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-173031 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-173031 delete -f testdata/storage-provisioner/pod.yaml: (1.118240181s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-173031 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9a7b3cf9-0bcb-4c9b-878b-fbfdc1be5ad0] Pending
helpers_test.go:352: "sp-pod" [9a7b3cf9-0bcb-4c9b-878b-fbfdc1be5ad0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9a7b3cf9-0bcb-4c9b-878b-fbfdc1be5ad0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.003011373s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-173031 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (53.51s)
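
What this test actually verifies is persistence, not just scheduling: a file written through the claim survives deleting and re-creating the consuming pod. The equivalent manual check (manifests from the minikube repo's testdata; `myclaim` and `sp-pod` are the names those manifests define):

    kubectl --context functional-173031 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-173031 get pvc myclaim      # wait for phase Bound
    kubectl --context functional-173031 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-173031 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-173031 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-173031 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-173031 exec sp-pod -- ls /tmp/mount   # foo is still there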

TestFunctional/parallel/SSHCmd (0.32s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

TestFunctional/parallel/CpCmd (0.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh -n functional-173031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cp functional-173031:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2376070879/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh -n functional-173031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh -n functional-173031 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.98s)
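
`minikube cp` copies in both directions and creates missing target directories in the guest; the three cases covered above, as a sketch:

    # host -> guest
    minikube -p functional-173031 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # guest -> host
    minikube -p functional-173031 cp functional-173031:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> a guest path that does not exist yet
    minikube -p functional-173031 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    minikube -p functional-173031 ssh -n functional-173031 "sudo cat /tmp/does/not/exist/cp-test.txt"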

TestFunctional/parallel/MySQL (32.98s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-173031 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-v2dkg" [380c960c-680a-4396-84a9-c0b50058d7b3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-v2dkg" [380c960c-680a-4396-84a9-c0b50058d7b3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.005347087s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- mysql -ppassword -e "show databases;": exit status 1 (125.550774ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 09:33:28.975488    7590 retry.go:31] will retry after 647.176888ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- mysql -ppassword -e "show databases;": exit status 1 (119.503899ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 09:33:29.743006    7590 retry.go:31] will retry after 1.789915866s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.98s)
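
The ERROR 2002 failures above are expected noise: the pod reports Running before mysqld finishes initializing its socket, so the test retries with backoff until the query succeeds. A comparable shell-side retry (the pod name varies per run; the password comes from the test's mysql.yaml):

    # Poll until mysqld actually accepts connections.
    until kubectl --context functional-173031 exec mysql-5bb876957f-v2dkg -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done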

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7590/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /etc/test/nested/copy/7590/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7590.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /etc/ssl/certs/7590.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7590.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /usr/share/ca-certificates/7590.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/75902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /etc/ssl/certs/75902.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/75902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /usr/share/ca-certificates/75902.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
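
Certificates that the suite's setup places on the host are expected inside the guest under both certificate directories, plus an openssl-style hashed filename (51391683.0 here), which is presumably how the system trust store picks them up. Spot-checking by hand (7590 is this run's test PID, so the filenames vary per run):

    minikube -p functional-173031 ssh "sudo cat /etc/ssl/certs/7590.pem"
    minikube -p functional-173031 ssh "sudo cat /usr/share/ca-certificates/7590.pem"
    minikube -p functional-173031 ssh "sudo cat /etc/ssl/certs/51391683.0"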

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-173031 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "sudo systemctl is-active docker": exit status 1 (186.032315ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "sudo systemctl is-active containerd": exit status 1 (175.179231ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
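
With crio selected as the runtime, the others must be stopped: `systemctl is-active` prints `inactive` and exits 3, which surfaces through ssh as the non-zero exit the test asserts on. By hand (the crio line is an extrapolation, not part of this test):

    minikube -p functional-173031 ssh "sudo systemctl is-active docker"      # inactive, rc=3
    minikube -p functional-173031 ssh "sudo systemctl is-active containerd"  # inactive, rc=3
    minikube -p functional-173031 ssh "sudo systemctl is-active crio"        # expected: active, rc=0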

TestFunctional/parallel/License (0.72s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.72s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-173031 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-173031 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-722ct" [030d7f03-aa6a-476e-b758-f561e5d60579] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-722ct" [030d7f03-aa6a-476e-b758-f561e5d60579] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003987645s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "253.431097ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.820128ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "231.553149ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.064254ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (13.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdany-port1814245882/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763890360123176724" to /tmp/TestFunctionalparallelMountCmdany-port1814245882/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763890360123176724" to /tmp/TestFunctionalparallelMountCmdany-port1814245882/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763890360123176724" to /tmp/TestFunctionalparallelMountCmdany-port1814245882/001/test-1763890360123176724
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (148.470542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 09:32:40.271911    7590 retry.go:31] will retry after 561.070613ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 09:32 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 09:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 09:32 test-1763890360123176724
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh cat /mount-9p/test-1763890360123176724
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-173031 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1eec14c8-4186-404e-aabd-4373aaf6ab6a] Pending
helpers_test.go:352: "busybox-mount" [1eec14c8-4186-404e-aabd-4373aaf6ab6a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1eec14c8-4186-404e-aabd-4373aaf6ab6a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1eec14c8-4186-404e-aabd-4373aaf6ab6a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004127739s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-173031 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdany-port1814245882/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.99s)
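
`minikube mount` runs in the foreground, so the test backgrounds it, polls findmnt until the 9p filesystem appears (hence the one retry above), exercises the mount from a pod, then force-unmounts. A manual version (the /tmp/mnt-src path and file name are illustrative):

    mkdir -p /tmp/mnt-src && echo hello > /tmp/mnt-src/created-by-hand
    minikube -p functional-173031 mount /tmp/mnt-src:/mount-9p &   # keep it running in the background
    # Poll until the 9p mount is visible in the guest.
    until minikube -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
    minikube -p functional-173031 ssh "ls -la /mount-9p"
    minikube -p functional-173031 ssh "sudo umount -f /mount-9p"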

TestFunctional/parallel/ServiceCmd/List (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service list -o json
functional_test.go:1504: Took "282.795044ms" to run "out/minikube-linux-amd64 -p functional-173031 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.231:31037
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.231:31037
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-173031 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-173031
localhost/kicbase/echo-server:functional-173031
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-173031 image ls --format short --alsologtostderr:
I1123 09:33:05.446032   14602 out.go:360] Setting OutFile to fd 1 ...
I1123 09:33:05.446294   14602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.446305   14602 out.go:374] Setting ErrFile to fd 2...
I1123 09:33:05.446310   14602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.446548   14602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
I1123 09:33:05.447081   14602 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.447174   14602 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.449121   14602 ssh_runner.go:195] Run: systemctl --version
I1123 09:33:05.451313   14602 main.go:143] libmachine: domain functional-173031 has defined MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.451645   14602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:07", ip: ""} in network mk-functional-173031: {Iface:virbr1 ExpiryTime:2025-11-23 10:30:04 +0000 UTC Type:0 Mac:52:54:00:8d:25:07 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-173031 Clientid:01:52:54:00:8d:25:07}
I1123 09:33:05.451669   14602 main.go:143] libmachine: domain functional-173031 has defined IP address 192.168.39.231 and MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.451828   14602 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/functional-173031/id_rsa Username:docker}
I1123 09:33:05.542655   14602 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
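
`image ls` queries the CRI image store (the stderr trace shows it running `sudo crictl images --output json` in the guest) and renders it in several formats; the variants exercised in this group:

    minikube -p functional-173031 image ls --format short   # one repo:tag per line, as above
    minikube -p functional-173031 image ls --format table   # the table in the next block
    minikube -p functional-173031 image ls --format json    # full digests and sizes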

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls --format table --alsologtostderr
I1123 09:33:05.873235    7590 detect.go:223] nested VM detected
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-173031 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-173031  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-173031  │ 318cb97584835 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-173031 image ls --format table --alsologtostderr:
I1123 09:33:05.897893   14633 out.go:360] Setting OutFile to fd 1 ...
I1123 09:33:05.898247   14633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.898264   14633 out.go:374] Setting ErrFile to fd 2...
I1123 09:33:05.898271   14633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.898591   14633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
I1123 09:33:05.899933   14633 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.900157   14633 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.902616   14633 ssh_runner.go:195] Run: systemctl --version
I1123 09:33:05.905326   14633 main.go:143] libmachine: domain functional-173031 has defined MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.905782   14633 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:07", ip: ""} in network mk-functional-173031: {Iface:virbr1 ExpiryTime:2025-11-23 10:30:04 +0000 UTC Type:0 Mac:52:54:00:8d:25:07 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-173031 Clientid:01:52:54:00:8d:25:07}
I1123 09:33:05.905809   14633 main.go:143] libmachine: domain functional-173031 has defined IP address 192.168.39.231 and MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.905990   14633 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/functional-173031/id_rsa Username:docker}
I1123 09:33:06.013140   14633 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-173031 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-173031"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"318cb975848354211abe54207189b7dcddd70593beafe09e411d432adaf266fb","repoDigests":["localhost/minikube-local-ca
che-test@sha256:89ea556fa1b79e22e8158a3f08bf8086148348d79462f675b5ddb18de3a57438"],"repoTags":["localhost/minikube-local-cache-test:functional-173031"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sh
a256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],
"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e8
43f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425
bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-173031 image ls --format json --alsologtostderr:
I1123 09:33:05.653605   14612 out.go:360] Setting OutFile to fd 1 ...
I1123 09:33:05.654047   14612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.654060   14612 out.go:374] Setting ErrFile to fd 2...
I1123 09:33:05.654066   14612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:05.654303   14612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
I1123 09:33:05.654896   14612 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.654999   14612 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:05.657138   14612 ssh_runner.go:195] Run: systemctl --version
I1123 09:33:05.659656   14612 main.go:143] libmachine: domain functional-173031 has defined MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.660121   14612 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:07", ip: ""} in network mk-functional-173031: {Iface:virbr1 ExpiryTime:2025-11-23 10:30:04 +0000 UTC Type:0 Mac:52:54:00:8d:25:07 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-173031 Clientid:01:52:54:00:8d:25:07}
I1123 09:33:05.660146   14612 main.go:143] libmachine: domain functional-173031 has defined IP address 192.168.39.231 and MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:05.660283   14612 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/functional-173031/id_rsa Username:docker}
I1123 09:33:05.766873   14612 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-173031 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-173031
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 318cb975848354211abe54207189b7dcddd70593beafe09e411d432adaf266fb
repoDigests:
- localhost/minikube-local-cache-test@sha256:89ea556fa1b79e22e8158a3f08bf8086148348d79462f675b5ddb18de3a57438
repoTags:
- localhost/minikube-local-cache-test:functional-173031
size: "3330"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-173031 image ls --format yaml --alsologtostderr:
I1123 09:33:06.144610   14645 out.go:360] Setting OutFile to fd 1 ...
I1123 09:33:06.144964   14645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:06.144977   14645 out.go:374] Setting ErrFile to fd 2...
I1123 09:33:06.144983   14645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:06.145305   14645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
I1123 09:33:06.146113   14645 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:06.146267   14645 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:06.148777   14645 ssh_runner.go:195] Run: systemctl --version
I1123 09:33:06.151325   14645 main.go:143] libmachine: domain functional-173031 has defined MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:06.151798   14645 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:07", ip: ""} in network mk-functional-173031: {Iface:virbr1 ExpiryTime:2025-11-23 10:30:04 +0000 UTC Type:0 Mac:52:54:00:8d:25:07 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-173031 Clientid:01:52:54:00:8d:25:07}
I1123 09:33:06.151829   14645 main.go:143] libmachine: domain functional-173031 has defined IP address 192.168.39.231 and MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:06.152018   14645 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/functional-173031/id_rsa Username:docker}
I1123 09:33:06.246500   14645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
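The three ImageCommands/ImageList* runs above exercise the same listing path with different encodings; per the stderr traces, each one SSHes into the VM and reads sudo crictl images --output json, which the CLI then renders as a table, JSON, or YAML. A minimal sketch to reproduce all three by hand, assuming the functional-173031 profile from this run is still up:

	out/minikube-linux-amd64 -p functional-173031 image ls --format table
	out/minikube-linux-amd64 -p functional-173031 image ls --format json
	out/minikube-linux-amd64 -p functional-173031 image ls --format yaml
	# raw data behind every format, straight from the runtime:
	out/minikube-linux-amd64 -p functional-173031 ssh "sudo crictl images --output json"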

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh pgrep buildkitd: exit status 1 (173.849934ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image build -t localhost/my-image:functional-173031 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 image build -t localhost/my-image:functional-173031 testdata/build --alsologtostderr: (6.698312531s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-173031 image build -t localhost/my-image:functional-173031 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 80c69a97e1a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-173031
--> d74e6e74d80
Successfully tagged localhost/my-image:functional-173031
d74e6e74d80a85c4ef65d4e7ee77d00d540effaf9ab909e734cb5c24652eb972
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-173031 image build -t localhost/my-image:functional-173031 testdata/build --alsologtostderr:
I1123 09:33:06.554972   14666 out.go:360] Setting OutFile to fd 1 ...
I1123 09:33:06.555103   14666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:06.555112   14666 out.go:374] Setting ErrFile to fd 2...
I1123 09:33:06.555116   14666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:33:06.555287   14666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
I1123 09:33:06.555804   14666 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:06.556361   14666 config.go:182] Loaded profile config "functional-173031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:33:06.558472   14666 ssh_runner.go:195] Run: systemctl --version
I1123 09:33:06.560955   14666 main.go:143] libmachine: domain functional-173031 has defined MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:06.561441   14666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:07", ip: ""} in network mk-functional-173031: {Iface:virbr1 ExpiryTime:2025-11-23 10:30:04 +0000 UTC Type:0 Mac:52:54:00:8d:25:07 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-173031 Clientid:01:52:54:00:8d:25:07}
I1123 09:33:06.561462   14666 main.go:143] libmachine: domain functional-173031 has defined IP address 192.168.39.231 and MAC address 52:54:00:8d:25:07 in network mk-functional-173031
I1123 09:33:06.561640   14666 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/functional-173031/id_rsa Username:docker}
I1123 09:33:06.659763   14666 build_images.go:162] Building image from path: /tmp/build.1281104265.tar
I1123 09:33:06.659819   14666 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 09:33:06.679031   14666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1281104265.tar
I1123 09:33:06.688621   14666 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1281104265.tar: stat -c "%s %y" /var/lib/minikube/build/build.1281104265.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1281104265.tar': No such file or directory
I1123 09:33:06.688665   14666 ssh_runner.go:362] scp /tmp/build.1281104265.tar --> /var/lib/minikube/build/build.1281104265.tar (3072 bytes)
I1123 09:33:06.746457   14666 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1281104265
I1123 09:33:06.764933   14666 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1281104265 -xf /var/lib/minikube/build/build.1281104265.tar
I1123 09:33:06.776970   14666 crio.go:315] Building image: /var/lib/minikube/build/build.1281104265
I1123 09:33:06.777027   14666 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-173031 /var/lib/minikube/build/build.1281104265 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1123 09:33:13.166803   14666 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-173031 /var/lib/minikube/build/build.1281104265 --cgroup-manager=cgroupfs: (6.389754407s)
I1123 09:33:13.166880   14666 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1281104265
I1123 09:33:13.180468   14666 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1281104265.tar
I1123 09:33:13.192636   14666 build_images.go:218] Built localhost/my-image:functional-173031 from /tmp/build.1281104265.tar
I1123 09:33:13.192668   14666 build_images.go:134] succeeded building to: functional-173031
I1123 09:33:13.192673   14666 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
2025/11/23 09:33:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.47s)
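The build test first probes for buildkitd with pgrep (absent on crio, hence the expected exit 1), then falls back to the runtime's own builder: the build context is tarred, copied to /var/lib/minikube/build inside the VM, and built with sudo podman build --cgroup-manager=cgroupfs, as the trace above shows. A sketch of the same flow, assuming a build context like testdata/build (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /):

	out/minikube-linux-amd64 -p functional-173031 ssh pgrep buildkitd   # exit 1 is expected on crio
	out/minikube-linux-amd64 -p functional-173031 image build -t localhost/my-image:functional-173031 testdata/build
	out/minikube-linux-amd64 -p functional-173031 image ls              # should now include localhost/my-image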

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.359888322s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-173031
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdspecific-port2319147828/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.729775ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:32:54.311836    7590 retry.go:31] will retry after 590.690722ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdspecific-port2319147828/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "sudo umount -f /mount-9p": exit status 1 (181.75ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-173031 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdspecific-port2319147828/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)
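The specific-port variant asserts that the 9p mount honors a caller-chosen port; the first findmnt probe races the mount daemon coming up, which is why a single exit-1 plus a retry appears above. A sketch of the same sequence, with /tmp/mnt standing in for the generated temp directory:

	out/minikube-linux-amd64 mount -p functional-173031 /tmp/mnt:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-173031 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-173031 ssh "sudo umount -f /mount-9p"   # exits 32 if already unmounted, as above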

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image load --daemon kicbase/echo-server:functional-173031 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 image load --daemon kicbase/echo-server:functional-173031 --alsologtostderr: (2.674234497s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-173031 image ls: (1.251291638s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.93s)
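ImageLoadDaemon pushes a tag from the host Docker daemon into the cluster's crio storage and verifies it with image ls. A sketch using the echo-server tag seeded by the Setup test:

	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-173031
	out/minikube-linux-amd64 -p functional-173031 image load --daemon kicbase/echo-server:functional-173031
	out/minikube-linux-amd64 -p functional-173031 image ls   # expect localhost/kicbase/echo-server:functional-173031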

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount1: exit status 1 (204.382044ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 09:32:55.895978    7590 retry.go:31] will retry after 652.103333ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-173031 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-173031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4189886641/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)
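VerifyCleanup starts three concurrent mount daemons and then relies on a single mount --kill=true to stop them all; the "unable to find parent, assuming dead" lines confirm the per-mount stop found nothing left to kill. Sketch, again with /tmp/mnt as a stand-in for the temp directory:

	out/minikube-linux-amd64 mount -p functional-173031 /tmp/mnt:/mount1 &
	out/minikube-linux-amd64 mount -p functional-173031 /tmp/mnt:/mount2 &
	out/minikube-linux-amd64 mount -p functional-173031 /tmp/mnt:/mount3 &
	out/minikube-linux-amd64 -p functional-173031 ssh "findmnt -T" /mount1
	out/minikube-linux-amd64 mount -p functional-173031 --kill=true   # tears down the profile's mount daemons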

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
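The three UpdateContextCmd variants run the identical command and differ only in the kubeconfig state they start from (unchanged, no minikube cluster entry, no clusters at all). Sketch:

	out/minikube-linux-amd64 -p functional-173031 update-context --alsologtostderr -v=2
	kubectl config current-context   # should report functional-173031 afterwards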

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image load --daemon kicbase/echo-server:functional-173031 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.14989051s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-173031
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image load --daemon kicbase/echo-server:functional-173031 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image save kicbase/echo-server:functional-173031 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image rm kicbase/echo-server:functional-173031 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
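ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a tar round trip: save the in-cluster image to a host tarball, delete it from the runtime, then restore it from the file. Sketch, assuming a writable path for the tarball:

	out/minikube-linux-amd64 -p functional-173031 image save kicbase/echo-server:functional-173031 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-173031 image rm kicbase/echo-server:functional-173031
	out/minikube-linux-amd64 -p functional-173031 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-173031 image ls   # the echo-server tag should be back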

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-173031
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-173031 image save --daemon kicbase/echo-server:functional-173031 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-173031
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
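ImageSaveDaemon goes the other direction: the image is exported from crio straight into the host Docker daemon, where it lands under the localhost/ prefix the runtime stored it with. Sketch:

	docker rmi kicbase/echo-server:functional-173031
	out/minikube-linux-amd64 -p functional-173031 image save --daemon kicbase/echo-server:functional-173031
	docker image inspect localhost/kicbase/echo-server:functional-173031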

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-173031
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-173031
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-173031
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (214.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1123 09:34:26.892846    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:34:54.602250    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m34.30423053s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (214.89s)
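StartCluster is the entry point for the whole HA suite: --ha brings up additional control-plane nodes (ha-808217-m02 and ha-808217-m03 show up alongside the primary in the later tests), and --wait true blocks until the components are Ready. The essential invocation, minus the test harness's logging flags:

	out/minikube-linux-amd64 -p ha-808217 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5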

                                                
                                    
TestMultiControlPlane/serial/DeployApp (10.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 kubectl -- rollout status deployment/busybox: (8.55596105s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-8dvph -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-cv42k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-lw8fx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-8dvph -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-cv42k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-lw8fx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-8dvph -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-cv42k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-lw8fx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.93s)
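DeployApp rolls out a three-replica busybox deployment and checks DNS from every pod: an external name (kubernetes.io) plus the cluster service name in short and fully-qualified form. One leg as a sketch; the pod name is a placeholder for whatever the rollout produces:

	out/minikube-linux-amd64 -p ha-808217 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 -p ha-808217 kubectl -- rollout status deployment/busybox
	out/minikube-linux-amd64 -p ha-808217 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local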

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-8dvph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-8dvph -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-cv42k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-cv42k -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-lw8fx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 kubectl -- exec busybox-7b57f96db7-lw8fx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)
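PingHostFromPods verifies host reachability from inside each pod: nslookup host.minikube.internal is parsed for the resolved address (the awk/cut pipeline above), which is then pinged; on this KVM network that is the 192.168.39.1 gateway. Sketch with a placeholder pod name:

	out/minikube-linux-amd64 -p ha-808217 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-808217 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"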

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node add --alsologtostderr -v 5
E1123 09:37:38.477105    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.483541    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.494922    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.516848    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.558288    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.639717    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:38.801902    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:39.124128    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:39.766129    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:41.047954    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:43.610085    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:48.732139    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:37:58.973418    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 node add --alsologtostderr -v 5: (45.529243731s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.25s)
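node add with no role flag joins a worker (ha-808217-m04 in the later CopyFile test). The E1123 cert_rotation lines interleaved above refer to a missing client cert for the earlier functional-173031 profile, not to this cluster. Sketch:

	out/minikube-linux-amd64 -p ha-808217 node add --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5   # the new node should be listed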

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-808217 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp testdata/cp-test.txt ha-808217:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2479235803/001/cp-test_ha-808217.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217:/home/docker/cp-test.txt ha-808217-m02:/home/docker/cp-test_ha-808217_ha-808217-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test_ha-808217_ha-808217-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217:/home/docker/cp-test.txt ha-808217-m03:/home/docker/cp-test_ha-808217_ha-808217-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test_ha-808217_ha-808217-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217:/home/docker/cp-test.txt ha-808217-m04:/home/docker/cp-test_ha-808217_ha-808217-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test_ha-808217_ha-808217-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp testdata/cp-test.txt ha-808217-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2479235803/001/cp-test_ha-808217-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m02:/home/docker/cp-test.txt ha-808217:/home/docker/cp-test_ha-808217-m02_ha-808217.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test_ha-808217-m02_ha-808217.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m02:/home/docker/cp-test.txt ha-808217-m03:/home/docker/cp-test_ha-808217-m02_ha-808217-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test_ha-808217-m02_ha-808217-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m02:/home/docker/cp-test.txt ha-808217-m04:/home/docker/cp-test_ha-808217-m02_ha-808217-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test_ha-808217-m02_ha-808217-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp testdata/cp-test.txt ha-808217-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2479235803/001/cp-test_ha-808217-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m03:/home/docker/cp-test.txt ha-808217:/home/docker/cp-test_ha-808217-m03_ha-808217.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test_ha-808217-m03_ha-808217.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m03:/home/docker/cp-test.txt ha-808217-m02:/home/docker/cp-test_ha-808217-m03_ha-808217-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test_ha-808217-m03_ha-808217-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m03:/home/docker/cp-test.txt ha-808217-m04:/home/docker/cp-test_ha-808217-m03_ha-808217-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test_ha-808217-m03_ha-808217-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp testdata/cp-test.txt ha-808217-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2479235803/001/cp-test_ha-808217-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m04:/home/docker/cp-test.txt ha-808217:/home/docker/cp-test_ha-808217-m04_ha-808217.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217 "sudo cat /home/docker/cp-test_ha-808217-m04_ha-808217.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m04:/home/docker/cp-test.txt ha-808217-m02:/home/docker/cp-test_ha-808217-m04_ha-808217-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m02 "sudo cat /home/docker/cp-test_ha-808217-m04_ha-808217-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 cp ha-808217-m04:/home/docker/cp-test.txt ha-808217-m03:/home/docker/cp-test_ha-808217-m04_ha-808217-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 ssh -n ha-808217-m03 "sudo cat /home/docker/cp-test_ha-808217-m04_ha-808217-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.65s)

TestMultiControlPlane/serial/StopSecondaryNode (89.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
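Note: minikube status signals degradation through its exit code rather than through stdout alone; with m02 stopped, the status call below exits with status 7. A minimal sketch (assuming the binary path and profile name used throughout this report) of capturing that code from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64", "-p", "ha-808217", "status").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // A nil err (exit 0) means every node reported fully up.
            fmt.Println("status exit code:", ee.ExitCode())
        }
    }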
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node stop m02 --alsologtostderr -v 5
E1123 09:38:19.454702    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:39:00.417513    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:39:26.892972    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 node stop m02 --alsologtostderr -v 5: (1m28.516357708s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5: exit status 7 (496.308477ms)

-- stdout --
	ha-808217
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-808217-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808217-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-808217-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1123 09:39:46.578232   17857 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:39:46.578456   17857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:39:46.578463   17857 out.go:374] Setting ErrFile to fd 2...
	I1123 09:39:46.578467   17857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:39:46.578635   17857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:39:46.578783   17857 out.go:368] Setting JSON to false
	I1123 09:39:46.578808   17857 mustload.go:66] Loading cluster: ha-808217
	I1123 09:39:46.578871   17857 notify.go:221] Checking for updates...
	I1123 09:39:46.579218   17857 config.go:182] Loaded profile config "ha-808217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:39:46.579241   17857 status.go:174] checking status of ha-808217 ...
	I1123 09:39:46.581166   17857 status.go:371] ha-808217 host status = "Running" (err=<nil>)
	I1123 09:39:46.581185   17857 host.go:66] Checking if "ha-808217" exists ...
	I1123 09:39:46.583752   17857 main.go:143] libmachine: domain ha-808217 has defined MAC address 52:54:00:74:c4:de in network mk-ha-808217
	I1123 09:39:46.584167   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c4:de", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:33:48 +0000 UTC Type:0 Mac:52:54:00:74:c4:de Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-808217 Clientid:01:52:54:00:74:c4:de}
	I1123 09:39:46.584193   17857 main.go:143] libmachine: domain ha-808217 has defined IP address 192.168.39.218 and MAC address 52:54:00:74:c4:de in network mk-ha-808217
	I1123 09:39:46.584420   17857 host.go:66] Checking if "ha-808217" exists ...
	I1123 09:39:46.584600   17857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:39:46.586785   17857 main.go:143] libmachine: domain ha-808217 has defined MAC address 52:54:00:74:c4:de in network mk-ha-808217
	I1123 09:39:46.587195   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:74:c4:de", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:33:48 +0000 UTC Type:0 Mac:52:54:00:74:c4:de Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-808217 Clientid:01:52:54:00:74:c4:de}
	I1123 09:39:46.587239   17857 main.go:143] libmachine: domain ha-808217 has defined IP address 192.168.39.218 and MAC address 52:54:00:74:c4:de in network mk-ha-808217
	I1123 09:39:46.587401   17857 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/ha-808217/id_rsa Username:docker}
	I1123 09:39:46.674462   17857 ssh_runner.go:195] Run: systemctl --version
	I1123 09:39:46.681516   17857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:39:46.702366   17857 kubeconfig.go:125] found "ha-808217" server: "https://192.168.39.254:8443"
	I1123 09:39:46.702410   17857 api_server.go:166] Checking apiserver status ...
	I1123 09:39:46.702458   17857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:39:46.724863   17857 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	W1123 09:39:46.736506   17857 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:39:46.736560   17857 ssh_runner.go:195] Run: ls
	I1123 09:39:46.741595   17857 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1123 09:39:46.746574   17857 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1123 09:39:46.746594   17857 status.go:463] ha-808217 apiserver status = Running (err=<nil>)
	I1123 09:39:46.746601   17857 status.go:176] ha-808217 status: &{Name:ha-808217 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:39:46.746621   17857 status.go:174] checking status of ha-808217-m02 ...
	I1123 09:39:46.748222   17857 status.go:371] ha-808217-m02 host status = "Stopped" (err=<nil>)
	I1123 09:39:46.748237   17857 status.go:384] host is not running, skipping remaining checks
	I1123 09:39:46.748242   17857 status.go:176] ha-808217-m02 status: &{Name:ha-808217-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:39:46.748254   17857 status.go:174] checking status of ha-808217-m03 ...
	I1123 09:39:46.749448   17857 status.go:371] ha-808217-m03 host status = "Running" (err=<nil>)
	I1123 09:39:46.749468   17857 host.go:66] Checking if "ha-808217-m03" exists ...
	I1123 09:39:46.751658   17857 main.go:143] libmachine: domain ha-808217-m03 has defined MAC address 52:54:00:5f:d8:0e in network mk-ha-808217
	I1123 09:39:46.752076   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5f:d8:0e", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:35:51 +0000 UTC Type:0 Mac:52:54:00:5f:d8:0e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-808217-m03 Clientid:01:52:54:00:5f:d8:0e}
	I1123 09:39:46.752097   17857 main.go:143] libmachine: domain ha-808217-m03 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:d8:0e in network mk-ha-808217
	I1123 09:39:46.752241   17857 host.go:66] Checking if "ha-808217-m03" exists ...
	I1123 09:39:46.752416   17857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:39:46.754777   17857 main.go:143] libmachine: domain ha-808217-m03 has defined MAC address 52:54:00:5f:d8:0e in network mk-ha-808217
	I1123 09:39:46.755267   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5f:d8:0e", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:35:51 +0000 UTC Type:0 Mac:52:54:00:5f:d8:0e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-808217-m03 Clientid:01:52:54:00:5f:d8:0e}
	I1123 09:39:46.755299   17857 main.go:143] libmachine: domain ha-808217-m03 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:d8:0e in network mk-ha-808217
	I1123 09:39:46.755446   17857 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/ha-808217-m03/id_rsa Username:docker}
	I1123 09:39:46.842042   17857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:39:46.863730   17857 kubeconfig.go:125] found "ha-808217" server: "https://192.168.39.254:8443"
	I1123 09:39:46.863764   17857 api_server.go:166] Checking apiserver status ...
	I1123 09:39:46.863809   17857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:39:46.884367   17857 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1778/cgroup
	W1123 09:39:46.898076   17857 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1778/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:39:46.898122   17857 ssh_runner.go:195] Run: ls
	I1123 09:39:46.903579   17857 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1123 09:39:46.908545   17857 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1123 09:39:46.908564   17857 status.go:463] ha-808217-m03 apiserver status = Running (err=<nil>)
	I1123 09:39:46.908571   17857 status.go:176] ha-808217-m03 status: &{Name:ha-808217-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:39:46.908592   17857 status.go:174] checking status of ha-808217-m04 ...
	I1123 09:39:46.910037   17857 status.go:371] ha-808217-m04 host status = "Running" (err=<nil>)
	I1123 09:39:46.910059   17857 host.go:66] Checking if "ha-808217-m04" exists ...
	I1123 09:39:46.912320   17857 main.go:143] libmachine: domain ha-808217-m04 has defined MAC address 52:54:00:62:a7:97 in network mk-ha-808217
	I1123 09:39:46.912724   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:a7:97", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:37:37 +0000 UTC Type:0 Mac:52:54:00:62:a7:97 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-808217-m04 Clientid:01:52:54:00:62:a7:97}
	I1123 09:39:46.912752   17857 main.go:143] libmachine: domain ha-808217-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:62:a7:97 in network mk-ha-808217
	I1123 09:39:46.912894   17857 host.go:66] Checking if "ha-808217-m04" exists ...
	I1123 09:39:46.913142   17857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:39:46.915093   17857 main.go:143] libmachine: domain ha-808217-m04 has defined MAC address 52:54:00:62:a7:97 in network mk-ha-808217
	I1123 09:39:46.915443   17857 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:a7:97", ip: ""} in network mk-ha-808217: {Iface:virbr1 ExpiryTime:2025-11-23 10:37:37 +0000 UTC Type:0 Mac:52:54:00:62:a7:97 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-808217-m04 Clientid:01:52:54:00:62:a7:97}
	I1123 09:39:46.915461   17857 main.go:143] libmachine: domain ha-808217-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:62:a7:97 in network mk-ha-808217
	I1123 09:39:46.915595   17857 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/ha-808217-m04/id_rsa Username:docker}
	I1123 09:39:46.999775   17857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:39:47.016832   17857 status.go:176] ha-808217-m04 status: &{Name:ha-808217-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node start m02 --alsologtostderr -v 5
E1123 09:40:22.339686    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 node start m02 --alsologtostderr -v 5: (40.14795963s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.03500442s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
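Note: this test is a round-trip invariant check: it records node list, stops every node, restarts with --wait true, and asserts the node list is unchanged. A rough standalone sketch of the same flow, assuming the run helper name and the binary/profile from this report:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // run invokes the minikube binary under test and returns its stdout.
    func run(args ...string) []byte {
        out, _ := exec.Command("out/minikube-linux-amd64", args...).Output()
        return out
    }

    func main() {
        before := run("-p", "ha-808217", "node", "list")
        run("-p", "ha-808217", "stop")
        run("-p", "ha-808217", "start", "--wait", "true")
        after := run("-p", "ha-808217", "node", "list")
        fmt.Println("node list preserved across restart:", bytes.Equal(before, after))
    }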
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 stop --alsologtostderr -v 5
E1123 09:42:38.476669    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:43:06.182849    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:44:26.895270    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 stop --alsologtostderr -v 5: (4m18.909186217s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 start --wait true --alsologtostderr -v 5
E1123 09:45:49.964446    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 start --wait true --alsologtostderr -v 5: (2m13.560250205s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.63s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 node delete m03 --alsologtostderr -v 5: (17.687807133s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
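Note: the shell quoting above obscures the go-template; unrolled, it visits every node and prints the status of its Ready condition, one per line:

    {{range .items}}
      {{range .status.conditions}}
        {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
      {{end}}
    {{end}}

After deleting m03, each remaining node is expected to print True.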
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (256.67s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 stop --alsologtostderr -v 5
E1123 09:47:38.477405    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:49:26.892920    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 stop --alsologtostderr -v 5: (4m16.604164682s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5: exit status 7 (60.586992ms)

-- stdout --
	ha-808217
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808217-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808217-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 09:51:37.659611   21169 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:51:37.659853   21169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:51:37.659862   21169 out.go:374] Setting ErrFile to fd 2...
	I1123 09:51:37.659866   21169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:51:37.660062   21169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 09:51:37.660219   21169 out.go:368] Setting JSON to false
	I1123 09:51:37.660248   21169 mustload.go:66] Loading cluster: ha-808217
	I1123 09:51:37.660310   21169 notify.go:221] Checking for updates...
	I1123 09:51:37.660776   21169 config.go:182] Loaded profile config "ha-808217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:51:37.660801   21169 status.go:174] checking status of ha-808217 ...
	I1123 09:51:37.662758   21169 status.go:371] ha-808217 host status = "Stopped" (err=<nil>)
	I1123 09:51:37.662770   21169 status.go:384] host is not running, skipping remaining checks
	I1123 09:51:37.662775   21169 status.go:176] ha-808217 status: &{Name:ha-808217 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:51:37.662789   21169 status.go:174] checking status of ha-808217-m02 ...
	I1123 09:51:37.663912   21169 status.go:371] ha-808217-m02 host status = "Stopped" (err=<nil>)
	I1123 09:51:37.663924   21169 status.go:384] host is not running, skipping remaining checks
	I1123 09:51:37.663928   21169 status.go:176] ha-808217-m02 status: &{Name:ha-808217-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:51:37.663952   21169 status.go:174] checking status of ha-808217-m04 ...
	I1123 09:51:37.664981   21169 status.go:371] ha-808217-m04 host status = "Stopped" (err=<nil>)
	I1123 09:51:37.664993   21169 status.go:384] host is not running, skipping remaining checks
	I1123 09:51:37.664997   21169 status.go:176] ha-808217-m04 status: &{Name:ha-808217-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (256.67s)

TestMultiControlPlane/serial/RestartCluster (104.16s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1123 09:52:38.476108    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m43.548886071s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.16s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

TestMultiControlPlane/serial/AddSecondaryNode (82.7s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 node add --control-plane --alsologtostderr -v 5
E1123 09:54:01.546499    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:54:26.893106    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-808217 node add --control-plane --alsologtostderr -v 5: (1m22.016995492s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-808217 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.70s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

TestJSONOutput/start/Command (75.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-068212 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-068212 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.931510886s)
--- PASS: TestJSONOutput/start/Command (75.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
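Note: with --output=json, every minikube line is a CloudEvents-style envelope (the TestErrorJSONOutput stdout further down shows the exact shape), and step events carry data.currentstep as a string. This test asserts that the step numbers in the captured start output never go backwards; a minimal standalone checker in the same spirit, reading events from stdin:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strconv"
    )

    type event struct {
        Type string `json:"type"`
        Data struct {
            CurrentStep string `json:"currentstep"`
        } `json:"data"`
    }

    func main() {
        last := -1
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
                continue // skip non-JSON lines and non-step events
            }
            n, err := strconv.Atoi(ev.Data.CurrentStep)
            if err != nil || n < last {
                fmt.Fprintln(os.Stderr, "step went backwards at:", ev.Data.CurrentStep)
                os.Exit(1)
            }
            last = n
        }
    }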
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-068212 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-068212 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-068212 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-068212 --output=json --user=testUser: (7.145353365s)
--- PASS: TestJSONOutput/stop/Command (7.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
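Note: even this intentionally failing start stays machine-readable: the final stdout line below is an io.k8s.sigs.minikube.error event whose data block carries the exit code (56), the error name (DRV_UNSUPPORTED_OS), and the human-readable message. A minimal decode of that event, declaring only the fields visible in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type errorEvent struct {
        SpecVersion string `json:"specversion"`
        Type        string `json:"type"`
        Data        struct {
            ExitCode string `json:"exitcode"`
            Message  string `json:"message"`
            Name     string `json:"name"`
        } `json:"data"`
    }

    func main() {
        line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
        var ev errorEvent
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("%s: %s (exit %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
    }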
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-511753 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-511753 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.93494ms)

-- stdout --
	{"specversion":"1.0","id":"9c392dbc-9bbb-4c4d-b4a1-2a9ae6edfabb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-511753] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3073dac-7468-444c-a112-b88031e298d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"f24a1676-d34f-4b1b-b77e-7edf223534c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4278287b-6811-4ebd-8cb3-50ad1d7b75ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig"}}
	{"specversion":"1.0","id":"8707cc0b-a206-4d3c-b369-18dc562902bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube"}}
	{"specversion":"1.0","id":"f7a84129-f22a-4122-b74b-a995b0129a76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ed56c86-9854-4b4f-8f30-1f530623cdc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"da1ce500-5fd8-427a-b692-426bbbf40dd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-511753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-511753
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (78.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-757941 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-757941 --driver=kvm2  --container-runtime=crio: (36.529122149s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-760898 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-760898 --driver=kvm2  --container-runtime=crio: (38.997535531s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-757941
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-760898
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-760898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-760898
helpers_test.go:175: Cleaning up "first-757941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-757941
--- PASS: TestMinikubeProfile (78.09s)

TestMountStart/serial/StartWithMountFirst (23.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
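Note: the start below wires the host directory /tmp/TestMountStartserial2981261880/001 to /minikube-host in the guest; --mount-port pins the server port and --mount-msize the message size (minikube serves these mounts over 9p; that protocol detail is background knowledge, not something this log states). The Verify steps that follow check the mount with ls and findmnt --json. A small sketch of parsing that findmnt output, declaring only the fields of interest:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // findmnt --json wraps its result in a "filesystems" array.
    type findmntOut struct {
        Filesystems []struct {
            Target string `json:"target"`
            FSType string `json:"fstype"`
        } `json:"filesystems"`
    }

    func main() {
        out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
        if err != nil {
            panic(err) // not mounted, or findmnt unavailable
        }
        var fm findmntOut
        if err := json.Unmarshal(out, &fm); err != nil {
            panic(err)
        }
        for _, fs := range fm.Filesystems {
            fmt.Println(fs.Target, fs.FSType)
        }
    }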
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-253617 --memory=3072 --mount-string /tmp/TestMountStartserial2981261880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1123 09:57:38.476695    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-253617 --memory=3072 --mount-string /tmp/TestMountStartserial2981261880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.082788339s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.08s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253617 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253617 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (19.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-271700 --memory=3072 --mount-string /tmp/TestMountStartserial2981261880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-271700 --memory=3072 --mount-string /tmp/TestMountStartserial2981261880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.790126648s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.79s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-253617 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-271700
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-271700: (1.288002274s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (22.23s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-271700
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-271700: (21.225739741s)
--- PASS: TestMountStart/serial/RestartStopped (22.23s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271700 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (101.43s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756144 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1123 09:59:26.893544    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756144 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.099284383s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.43s)

TestMultiNode/serial/DeployApp2Nodes (9.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-756144 -- rollout status deployment/busybox: (7.666222352s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-5rlm4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-fjz95 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-5rlm4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-fjz95 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-5rlm4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-fjz95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-5rlm4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-5rlm4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-fjz95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756144 -- exec busybox-7b57f96db7-fjz95 -- sh -c "ping -c 1 192.168.39.1"
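Note: the pipeline above extracts the resolved address of host.minikube.internal from the fifth line of nslookup's output and pings it (192.168.39.1, the host-side gateway in this run), proving each pod can reach the host. Inside the guest the same lookup is a single call in Go; a minimal sketch, assuming it runs somewhere that name resolves:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("host.minikube.internal")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // expected to contain the host gateway, 192.168.39.1 in this run
    }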
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (42.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756144 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-756144 -v=5 --alsologtostderr: (42.041728276s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.48s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-756144 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (5.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp testdata/cp-test.txt multinode-756144:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3452493746/001/cp-test_multinode-756144.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144:/home/docker/cp-test.txt multinode-756144-m02:/home/docker/cp-test_multinode-756144_multinode-756144-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test_multinode-756144_multinode-756144-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144:/home/docker/cp-test.txt multinode-756144-m03:/home/docker/cp-test_multinode-756144_multinode-756144-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test_multinode-756144_multinode-756144-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp testdata/cp-test.txt multinode-756144-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3452493746/001/cp-test_multinode-756144-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m02:/home/docker/cp-test.txt multinode-756144:/home/docker/cp-test_multinode-756144-m02_multinode-756144.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test_multinode-756144-m02_multinode-756144.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m02:/home/docker/cp-test.txt multinode-756144-m03:/home/docker/cp-test_multinode-756144-m02_multinode-756144-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test_multinode-756144-m02_multinode-756144-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp testdata/cp-test.txt multinode-756144-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3452493746/001/cp-test_multinode-756144-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m03:/home/docker/cp-test.txt multinode-756144:/home/docker/cp-test_multinode-756144-m03_multinode-756144.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144 "sudo cat /home/docker/cp-test_multinode-756144-m03_multinode-756144.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 cp multinode-756144-m03:/home/docker/cp-test.txt multinode-756144-m02:/home/docker/cp-test_multinode-756144-m03_multinode-756144-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test_multinode-756144-m03_multinode-756144-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.86s)
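The whole round-trip above is plain minikube CLI and can be replayed by hand against any multi-node profile. A minimal sketch of one hop, with names taken from the log (minikube here stands in for the out/minikube-linux-amd64 build under test):

    # copy a local file into the primary node
    minikube -p multinode-756144 cp testdata/cp-test.txt multinode-756144:/home/docker/cp-test.txt
    # fan it out node-to-node using the node:path syntax on both sides
    minikube -p multinode-756144 cp multinode-756144:/home/docker/cp-test.txt multinode-756144-m02:/home/docker/cp-test_multinode-756144_multinode-756144-m02.txt
    # verify the hop by reading the file back on the target node
    minikube -p multinode-756144 ssh -n multinode-756144-m02 "sudo cat /home/docker/cp-test_multinode-756144_multinode-756144-m02.txt"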

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-756144 node stop m03: (1.726847134s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756144 status: exit status 7 (326.455249ms)
-- stdout --
	multinode-756144
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756144-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756144-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr: exit status 7 (326.496182ms)
-- stdout --
	multinode-756144
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756144-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756144-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 10:01:23.804600   26773 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:01:23.804870   26773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:23.804880   26773 out.go:374] Setting ErrFile to fd 2...
	I1123 10:01:23.804884   26773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:01:23.805068   26773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:01:23.805229   26773 out.go:368] Setting JSON to false
	I1123 10:01:23.805252   26773 mustload.go:66] Loading cluster: multinode-756144
	I1123 10:01:23.805362   26773 notify.go:221] Checking for updates...
	I1123 10:01:23.805601   26773 config.go:182] Loaded profile config "multinode-756144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:01:23.805617   26773 status.go:174] checking status of multinode-756144 ...
	I1123 10:01:23.807524   26773 status.go:371] multinode-756144 host status = "Running" (err=<nil>)
	I1123 10:01:23.807538   26773 host.go:66] Checking if "multinode-756144" exists ...
	I1123 10:01:23.810254   26773 main.go:143] libmachine: domain multinode-756144 has defined MAC address 52:54:00:7b:da:9e in network mk-multinode-756144
	I1123 10:01:23.810776   26773 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:da:9e", ip: ""} in network mk-multinode-756144: {Iface:virbr1 ExpiryTime:2025-11-23 10:58:56 +0000 UTC Type:0 Mac:52:54:00:7b:da:9e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-756144 Clientid:01:52:54:00:7b:da:9e}
	I1123 10:01:23.810814   26773 main.go:143] libmachine: domain multinode-756144 has defined IP address 192.168.39.191 and MAC address 52:54:00:7b:da:9e in network mk-multinode-756144
	I1123 10:01:23.810984   26773 host.go:66] Checking if "multinode-756144" exists ...
	I1123 10:01:23.811229   26773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:01:23.813698   26773 main.go:143] libmachine: domain multinode-756144 has defined MAC address 52:54:00:7b:da:9e in network mk-multinode-756144
	I1123 10:01:23.814132   26773 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:da:9e", ip: ""} in network mk-multinode-756144: {Iface:virbr1 ExpiryTime:2025-11-23 10:58:56 +0000 UTC Type:0 Mac:52:54:00:7b:da:9e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-756144 Clientid:01:52:54:00:7b:da:9e}
	I1123 10:01:23.814160   26773 main.go:143] libmachine: domain multinode-756144 has defined IP address 192.168.39.191 and MAC address 52:54:00:7b:da:9e in network mk-multinode-756144
	I1123 10:01:23.814318   26773 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/multinode-756144/id_rsa Username:docker}
	I1123 10:01:23.899102   26773 ssh_runner.go:195] Run: systemctl --version
	I1123 10:01:23.905666   26773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:01:23.923084   26773 kubeconfig.go:125] found "multinode-756144" server: "https://192.168.39.191:8443"
	I1123 10:01:23.923130   26773 api_server.go:166] Checking apiserver status ...
	I1123 10:01:23.923188   26773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:01:23.943034   26773 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	W1123 10:01:23.953955   26773 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:01:23.954012   26773 ssh_runner.go:195] Run: ls
	I1123 10:01:23.959359   26773 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I1123 10:01:23.965020   26773 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I1123 10:01:23.965037   26773 status.go:463] multinode-756144 apiserver status = Running (err=<nil>)
	I1123 10:01:23.965046   26773 status.go:176] multinode-756144 status: &{Name:multinode-756144 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:01:23.965059   26773 status.go:174] checking status of multinode-756144-m02 ...
	I1123 10:01:23.966423   26773 status.go:371] multinode-756144-m02 host status = "Running" (err=<nil>)
	I1123 10:01:23.966438   26773 host.go:66] Checking if "multinode-756144-m02" exists ...
	I1123 10:01:23.968508   26773 main.go:143] libmachine: domain multinode-756144-m02 has defined MAC address 52:54:00:e5:a5:a3 in network mk-multinode-756144
	I1123 10:01:23.968825   26773 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:a5:a3", ip: ""} in network mk-multinode-756144: {Iface:virbr1 ExpiryTime:2025-11-23 10:59:51 +0000 UTC Type:0 Mac:52:54:00:e5:a5:a3 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-756144-m02 Clientid:01:52:54:00:e5:a5:a3}
	I1123 10:01:23.968851   26773 main.go:143] libmachine: domain multinode-756144-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:e5:a5:a3 in network mk-multinode-756144
	I1123 10:01:23.968985   26773 host.go:66] Checking if "multinode-756144-m02" exists ...
	I1123 10:01:23.969156   26773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:01:23.970921   26773 main.go:143] libmachine: domain multinode-756144-m02 has defined MAC address 52:54:00:e5:a5:a3 in network mk-multinode-756144
	I1123 10:01:23.971360   26773 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:a5:a3", ip: ""} in network mk-multinode-756144: {Iface:virbr1 ExpiryTime:2025-11-23 10:59:51 +0000 UTC Type:0 Mac:52:54:00:e5:a5:a3 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-756144-m02 Clientid:01:52:54:00:e5:a5:a3}
	I1123 10:01:23.971390   26773 main.go:143] libmachine: domain multinode-756144-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:e5:a5:a3 in network mk-multinode-756144
	I1123 10:01:23.971509   26773 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21968-3638/.minikube/machines/multinode-756144-m02/id_rsa Username:docker}
	I1123 10:01:24.051594   26773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:01:24.070880   26773 status.go:176] multinode-756144-m02 status: &{Name:multinode-756144-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:01:24.070927   26773 status.go:174] checking status of multinode-756144-m03 ...
	I1123 10:01:24.072617   26773 status.go:371] multinode-756144-m03 host status = "Stopped" (err=<nil>)
	I1123 10:01:24.072632   26773 status.go:384] host is not running, skipping remaining checks
	I1123 10:01:24.072637   26773 status.go:176] multinode-756144-m03 status: &{Name:multinode-756144-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
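Note the status semantics this test relies on: with one worker stopped, status still prints every node but exits 7 instead of 0, which is what the Non-zero exit lines above record. A minimal sketch, assuming the same profile:

    # stop only the m03 worker; the control plane and m02 stay up
    minikube -p multinode-756144 node stop m03
    # exit code 7 signals "some host stopped" rather than a command failure
    minikube -p multinode-756144 status
    echo "status exit code: $?"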

TestMultiNode/serial/StartAfterStop (41.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-756144 node start m03 -v=5 --alsologtostderr: (41.298902356s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.79s)
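The inverse operation, sketched under the same assumptions:

    # restart the stopped worker and confirm it rejoins
    minikube -p multinode-756144 node start m03
    minikube -p multinode-756144 status
    kubectl get nodes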

TestMultiNode/serial/RestartKeepsNodes (316.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756144
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-756144
E1123 10:02:29.967543    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:02:38.476776    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:04:26.897790    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-756144: (2m54.737818336s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756144 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756144 --wait=true -v=5 --alsologtostderr: (2m22.003928432s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756144
--- PASS: TestMultiNode/serial/RestartKeepsNodes (316.86s)
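The point being pinned down: a full stop/start cycle keeps the node inventory intact. Condensed sketch with the flags used above (--wait=true makes start block until cluster components are healthy):

    minikube node list -p multinode-756144    # record the node set
    minikube stop -p multinode-756144         # stops every node in the profile
    minikube start -p multinode-756144 --wait=true
    minikube node list -p multinode-756144    # should list the same nodes again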

TestMultiNode/serial/DeleteNode (2.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-756144 node delete m03: (2.095446638s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.56s)
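The final Ready check scrapes structured output instead of the human-readable table; the same go-template from the log, shown as a standalone command (outer quoting simplified here):

    minikube -p multinode-756144 node delete m03
    # prints one "True" per remaining Ready node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'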

TestMultiNode/serial/StopMultiNode (174.19s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 stop
E1123 10:07:38.476309    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:26.901178    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-756144 stop: (2m54.072787141s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756144 status: exit status 7 (60.184408ms)
-- stdout --
	multinode-756144
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756144-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr: exit status 7 (58.742474ms)
-- stdout --
	multinode-756144
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756144-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 10:10:19.471548   29265 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:19.471769   29265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:19.471777   29265 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:19.471780   29265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:19.472002   29265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:10:19.472162   29265 out.go:368] Setting JSON to false
	I1123 10:10:19.472193   29265 mustload.go:66] Loading cluster: multinode-756144
	I1123 10:10:19.472247   29265 notify.go:221] Checking for updates...
	I1123 10:10:19.472492   29265 config.go:182] Loaded profile config "multinode-756144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:19.472512   29265 status.go:174] checking status of multinode-756144 ...
	I1123 10:10:19.474458   29265 status.go:371] multinode-756144 host status = "Stopped" (err=<nil>)
	I1123 10:10:19.474474   29265 status.go:384] host is not running, skipping remaining checks
	I1123 10:10:19.474487   29265 status.go:176] multinode-756144 status: &{Name:multinode-756144 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:10:19.474513   29265 status.go:174] checking status of multinode-756144-m02 ...
	I1123 10:10:19.475688   29265 status.go:371] multinode-756144-m02 host status = "Stopped" (err=<nil>)
	I1123 10:10:19.475700   29265 status.go:384] host is not running, skipping remaining checks
	I1123 10:10:19.475704   29265 status.go:176] multinode-756144-m02 status: &{Name:multinode-756144-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (174.19s)

TestMultiNode/serial/RestartMultiNode (120.49s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756144 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1123 10:10:41.548100    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756144 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.045113018s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756144 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (120.49s)

TestMultiNode/serial/ValidateNameConflict (42.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756144
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756144-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-756144-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.074991ms)
-- stdout --
	* [multinode-756144-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-756144-m02' is duplicated with machine name 'multinode-756144-m02' in profile 'multinode-756144'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756144-m03 --driver=kvm2  --container-runtime=crio
E1123 10:12:38.476290    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756144-m03 --driver=kvm2  --container-runtime=crio: (41.806439094s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756144
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-756144: exit status 80 (198.452268ms)
-- stdout --
	* Adding node m03 to cluster multinode-756144 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-756144-m03 already exists in multinode-756144-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-756144-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.95s)
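Both failures above are naming guardrails: exit 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile, and exit 80 (GUEST_NODE_ADD) when node add would clash with an existing node. A sketch of the first case:

    # multinode-756144-m02 already names a machine inside profile multinode-756144,
    # so creating a standalone profile with that name is refused (exit 14)
    minikube start -p multinode-756144-m02 --driver=kvm2 --container-runtime=crio
    echo "exit code: $?"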

TestScheduledStopUnix (108.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-560737 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-560737 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.919630804s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560737 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 10:16:36.934043   31772 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:36.934140   31772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:36.934151   31772 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:36.934156   31772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:36.934388   31772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:16:36.934647   31772 out.go:368] Setting JSON to false
	I1123 10:16:36.934750   31772 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:16:36.935091   31772 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:36.935166   31772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/config.json ...
	I1123 10:16:36.935383   31772 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:16:36.935501   31772 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-560737 -n scheduled-stop-560737
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560737 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 10:16:37.216609   31818 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:16:37.216841   31818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:37.216849   31818 out.go:374] Setting ErrFile to fd 2...
	I1123 10:16:37.216853   31818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:16:37.217045   31818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:16:37.217270   31818 out.go:368] Setting JSON to false
	I1123 10:16:37.217459   31818 daemonize_unix.go:73] killing process 31807 as it is an old scheduled stop
	I1123 10:16:37.217556   31818 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:16:37.217880   31818 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:16:37.217961   31818 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/config.json ...
	I1123 10:16:37.218135   31818 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:16:37.218235   31818 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 10:16:37.223176    7590 retry.go:31] will retry after 68.719µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.224342    7590 retry.go:31] will retry after 207.123µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.225527    7590 retry.go:31] will retry after 271.513µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.226655    7590 retry.go:31] will retry after 310.164µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.227787    7590 retry.go:31] will retry after 442.946µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.228913    7590 retry.go:31] will retry after 739.775µs: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.229978    7590 retry.go:31] will retry after 1.580122ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.232142    7590 retry.go:31] will retry after 1.944447ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.234348    7590 retry.go:31] will retry after 2.186662ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.237534    7590 retry.go:31] will retry after 3.770875ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.241746    7590 retry.go:31] will retry after 8.084581ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.249913    7590 retry.go:31] will retry after 8.573445ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.259195    7590 retry.go:31] will retry after 18.374404ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.278462    7590 retry.go:31] will retry after 26.472559ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.305710    7590 retry.go:31] will retry after 15.859257ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
I1123 10:16:37.322020    7590 retry.go:31] will retry after 39.059717ms: open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560737 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560737 -n scheduled-stop-560737
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560737
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560737 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 10:17:02.908890   31975 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:17:02.909226   31975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:02.909236   31975 out.go:374] Setting ErrFile to fd 2...
	I1123 10:17:02.909241   31975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:02.909459   31975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:17:02.909742   31975 out.go:368] Setting JSON to false
	I1123 10:17:02.909833   31975 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:17:02.910183   31975 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:02.910256   31975 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/scheduled-stop-560737/config.json ...
	I1123 10:17:02.910469   31975 mustload.go:66] Loading cluster: scheduled-stop-560737
	I1123 10:17:02.910583   31975 config.go:182] Loaded profile config "scheduled-stop-560737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1123 10:17:38.476719    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560737
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-560737: exit status 7 (59.63888ms)
-- stdout --
	scheduled-stop-560737
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560737 -n scheduled-stop-560737
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560737 -n scheduled-stop-560737: exit status 7 (58.083009ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-560737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-560737
--- PASS: TestScheduledStopUnix (108.52s)
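The scheduled-stop flow above reduces to three CLI operations; a minimal sketch with illustrative durations:

    minikube stop -p scheduled-stop-560737 --schedule 5m       # arm a delayed stop and return immediately
    minikube stop -p scheduled-stop-560737 --cancel-scheduled  # cancel anything pending
    minikube stop -p scheduled-stop-560737 --schedule 15s      # re-arm; once it fires, status exits 7
    sleep 20 && minikube status -p scheduled-stop-560737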

TestRunningBinaryUpgrade (132.75s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.487239320 start -p running-upgrade-222526 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1123 10:19:09.969138    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.487239320 start -p running-upgrade-222526 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m15.086868594s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-222526 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-222526 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.323980266s)
helpers_test.go:175: Cleaning up "running-upgrade-222526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-222526
--- PASS: TestRunningBinaryUpgrade (132.75s)
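The pattern here: create the cluster with an older released binary, then run start on the same profile with the binary under test while the cluster is still running. Sketch (the v1.32.0 path is a temp file the test provisions; any minikube v1.32.0 binary plays that role):

    # old binary creates the profile (note the legacy --vm-driver spelling)
    /tmp/minikube-v1.32.0.487239320 start -p running-upgrade-222526 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    # new binary adopts and upgrades the same, still-running profile
    out/minikube-linux-amd64 start -p running-upgrade-222526 --memory=3072 --driver=kvm2 --container-runtime=crio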

TestKubernetesUpgrade (528.9s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.232297455s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-356629
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-356629: (1.835980158s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-356629 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-356629 status --format={{.Host}}: exit status 7 (58.874574ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.151407534s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-356629 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.025191ms)
-- stdout --
	* [kubernetes-upgrade-356629] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-356629
	    minikube start -p kubernetes-upgrade-356629 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3566292 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-356629 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-356629 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m40.509898229s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-356629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-356629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-356629: (1.93965703s)
--- PASS: TestKubernetesUpgrade (528.90s)
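The version rules exercised above: stop-then-start with a newer --kubernetes-version upgrades in place, while a downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and must go through delete/recreate, as the suggestion block spells out. Condensed sketch:

    minikube start -p kubernetes-upgrade-356629 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-356629
    minikube start -p kubernetes-upgrade-356629 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio  # upgrade: ok
    minikube start -p kubernetes-upgrade-356629 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio  # downgrade: exit 106
    minikube delete -p kubernetes-upgrade-356629   # supported downgrade path: recreate from scratch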

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (90.178261ms)
-- stdout --
	* [NoKubernetes-564438] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
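A pure flag-validation case: --no-kubernetes and --kubernetes-version are mutually exclusive, and the hint also covers a version pinned in the global config. Sketch:

    minikube start -p NoKubernetes-564438 --no-kubernetes --kubernetes-version=v1.28.0   # exit 14 (MK_USAGE)
    # if the version comes from global config rather than a flag, clear it first
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-564438 --no-kubernetes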

TestNoKubernetes/serial/StartWithK8s (86.61s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564438 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564438 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.373030136s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-564438 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (86.61s)

TestNetworkPlugins/group/false (3.41s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-546508 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-546508 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (138.138183ms)
-- stdout --
	* [false-546508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1123 10:17:52.042459   33432 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:17:52.042691   33432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:52.042702   33432 out.go:374] Setting ErrFile to fd 2...
	I1123 10:17:52.042706   33432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:17:52.042952   33432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-3638/.minikube/bin
	I1123 10:17:52.043436   33432 out.go:368] Setting JSON to false
	I1123 10:17:52.044397   33432 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3610,"bootTime":1763889462,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 10:17:52.044445   33432 start.go:143] virtualization: kvm guest
	I1123 10:17:52.046486   33432 out.go:179] * [false-546508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 10:17:52.047728   33432 notify.go:221] Checking for updates...
	I1123 10:17:52.047736   33432 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:17:52.048968   33432 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:17:52.050458   33432 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-3638/kubeconfig
	I1123 10:17:52.051742   33432 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-3638/.minikube
	I1123 10:17:52.056484   33432 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 10:17:52.057899   33432 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:17:52.060072   33432 config.go:182] Loaded profile config "NoKubernetes-564438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:52.060267   33432 config.go:182] Loaded profile config "force-systemd-env-680045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:52.060437   33432 config.go:182] Loaded profile config "offline-crio-530151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:17:52.060560   33432 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:17:52.095620   33432 out.go:179] * Using the kvm2 driver based on user configuration
	I1123 10:17:52.096828   33432 start.go:309] selected driver: kvm2
	I1123 10:17:52.096847   33432 start.go:927] validating driver "kvm2" against <nil>
	I1123 10:17:52.096874   33432 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:17:52.099340   33432 out.go:203] 
	W1123 10:17:52.100536   33432 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 10:17:52.101796   33432 out.go:203] 
** /stderr **
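The failure is immediate flag validation, before any VM exists: the crio runtime requires a CNI, so --cni=false is rejected with exit 14. That also explains the debugLogs below, which probe a context that was never created and so report "context not found" throughout. Sketch:

    minikube start -p false-546508 --cni=false --driver=kvm2 --container-runtime=crio   # exit 14: "crio" requires CNI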
net_test.go:88: 
----------------------- debugLogs start: false-546508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-546508
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-546508
>>> host: /etc/nsswitch.conf:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> host: /etc/hosts:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> host: /etc/resolv.conf:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-546508
>>> host: crictl pods:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> host: crictl containers:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> k8s: describe netcat deployment:
error: context "false-546508" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-546508" does not exist
>>> k8s: netcat logs:
error: context "false-546508" does not exist
>>> k8s: describe coredns deployment:
error: context "false-546508" does not exist
>>> k8s: describe coredns pods:
error: context "false-546508" does not exist
>>> k8s: coredns logs:
error: context "false-546508" does not exist
>>> k8s: describe api server pod(s):
error: context "false-546508" does not exist
>>> k8s: api server logs:
error: context "false-546508" does not exist
>>> host: /etc/cni:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> host: ip a s:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"
>>> host: ip r s:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-546508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-546508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-546508" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-546508

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546508"

                                                
                                                
----------------------- debugLogs end: false-546508 [took: 3.11871734s] --------------------------------
helpers_test.go:175: Cleaning up "false-546508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-546508
--- PASS: TestNetworkPlugins/group/false (3.41s)

TestISOImage/Setup (74.83s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-693686 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-693686 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m14.830090167s)
--- PASS: TestISOImage/Setup (74.83s)
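
The eleven Binaries subtests that follow each assert that one tool is present on the guest's PATH by running "which" over SSH. A minimal shell sketch of the same sweep, assuming the guest-693686 profile from Setup is still running (the tool list is copied from the subtests below):

    # probe each expected binary inside the guest VM; report any that are missing
    for b in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
      out/minikube-linux-amd64 -p guest-693686 ssh "which $b" >/dev/null || echo "missing: $b"
    done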

TestISOImage/Binaries/crictl (0.19s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

TestISOImage/Binaries/curl (0.18s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.19s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

TestISOImage/Binaries/iptables (0.18s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.19s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

TestISOImage/Binaries/socat (0.19s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

TestISOImage/Binaries/wget (0.18s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.19s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)

TestNoKubernetes/serial/StartWithStopK8s (47.62s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1123 10:19:26.893370    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.497044902s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-564438 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-564438 status -o json: exit status 2 (254.220171ms)

-- stdout --
	{"Name":"NoKubernetes-564438","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-564438
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.62s)
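
The status JSON above is what the test inspects: the VM host is Running while the kubelet and API server stay Stopped. A sketch of the same check from a shell, assuming jq is installed (field names are taken from the JSON above; minikube itself exits 2 when components are stopped, so only jq's exit code matters here):

    # assert "host up, kubernetes down" from the machine-readable status
    out/minikube-linux-amd64 -p NoKubernetes-564438 status -o json \
      | jq -e '.Host == "Running" and .Kubelet == "Stopped"'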

TestStoppedBinaryUpgrade/Setup (3.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.73s)

TestStoppedBinaryUpgrade/Upgrade (119.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2147923534 start -p stopped-upgrade-953016 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2147923534 start -p stopped-upgrade-953016 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m1.461927238s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2147923534 -p stopped-upgrade-953016 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2147923534 -p stopped-upgrade-953016 stop: (1.733087345s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-953016 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-953016 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.283599142s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.48s)

TestNoKubernetes/serial/Start (45.17s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.170284422s)
--- PASS: TestNoKubernetes/serial/Start (45.17s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21968-3638/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
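
The check above inspects the version-stamped cache directory to confirm that a --no-kubernetes start pulled down no Kubernetes binaries. A rough shell equivalent, assuming the same cache path; that the assertion is "directory absent or empty" is inferred from the test name, not shown in the log:

    # hypothetical re-creation of the cache check; the path comes from the log line above
    d=/home/jenkins/minikube-integration/21968-3638/.minikube/cache/linux/amd64/v0.0.0
    if [ ! -d "$d" ] || [ -z "$(ls -A "$d" 2>/dev/null)" ]; then
      echo "no kubernetes downloads"
    fi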

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-564438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-564438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (160.197144ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

TestNoKubernetes/serial/ProfileList (5.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.511303936s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.451885599s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.96s)
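
Both invocations above enumerate the same profiles, once as a table and once as JSON. A sketch for pulling just the profile names out of the JSON form, assuming jq and assuming the usual "valid"/"invalid" top-level arrays in minikube's profile list output:

    # list the names of all valid profiles
    out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'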

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-564438
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-564438: (1.300774288s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (64.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564438 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564438 --driver=kvm2  --container-runtime=crio: (1m4.604656968s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (64.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-953016
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-564438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-564438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.157611ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestPause/serial/Start (107.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-312037 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-312037 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m47.990018228s)
--- PASS: TestPause/serial/Start (107.99s)

TestNetworkPlugins/group/auto/Start (53.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (53.820580557s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.82s)

TestPause/serial/SecondStartNoReconfiguration (35.67s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-312037 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-312037 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.644569042s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-546508 "pgrep -a kubelet"
I1123 10:24:23.836651    7590 config.go:182] Loaded profile config "auto-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

TestNetworkPlugins/group/auto/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5rp6f" [37a36f69-0f7e-48ad-ac1f-b5a40c95def2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5rp6f" [37a36f69-0f7e-48ad-ac1f-b5a40c95def2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003568039s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)
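
The helper above polls pods matching the app=netcat label until they report Running and Ready. A rough kubectl equivalent of the same wait, using the label, namespace, and timeout shown in the log:

    # block until every app=netcat pod in default is Ready, or time out after 15m
    kubectl --context auto-546508 wait --for=condition=ready pod -l app=netcat -n default --timeout=15m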

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-312037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-312037 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-312037 --output=json --layout=cluster: exit status 2 (241.927659ms)

-- stdout --
	{"Name":"pause-312037","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-312037","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
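
The cluster-layout JSON above encodes pause state as HTTP-style status codes: 418 for Paused, 405 for Stopped, 200 for OK. A sketch for reading those fields back out, assuming jq (the paths follow the JSON shown above; exit status 2 from minikube is expected while the cluster is paused):

    # print the cluster-wide and kubelet status names from the layout JSON
    out/minikube-linux-amd64 status -p pause-312037 --output=json --layout=cluster \
      | jq '.StatusName, .Nodes[0].Components.kubelet.StatusName'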

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-312037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-312037 --alsologtostderr -v=5
E1123 10:24:26.893331    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/addons-894046/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (0.9s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-312037 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.90s)

TestPause/serial/VerifyDeletedResources (15.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.309878634s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.31s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
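
Taken together, the DNS, Localhost, and HairPin subtests probe three paths from inside the netcat pod: cluster DNS, the pod's own port, and the pod reached back through its own service. The same three probes as a standalone sketch, with the commands copied from the subtests above and comments added:

    kubectl --context auto-546508 exec deployment/netcat -- nslookup kubernetes.default       # cluster DNS resolves
    kubectl --context auto-546508 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080    # pod answers on its own port
    kubectl --context auto-546508 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080       # hairpin: back in via its own service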

TestNetworkPlugins/group/kindnet/Start (60.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m0.534299921s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.53s)

TestNetworkPlugins/group/calico/Start (89.31s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m29.305412535s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.31s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wnskr" [b24f0607-2abb-4a62-b0fa-a5300ac308f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00641185s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-546508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lmcn7" [0061b588-b3e7-4ff7-b1f4-6c836fd47737] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lmcn7" [0061b588-b3e7-4ff7-b1f4-6c836fd47737] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006238745s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

TestNetworkPlugins/group/custom-flannel/Start (74.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.509218207s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.51s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (80.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m20.687711957s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.69s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dmmh8" [c56e63bc-855f-4182-b0a4-031992fe8ee8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-dmmh8" [c56e63bc-855f-4182-b0a4-031992fe8ee8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004400451s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-546508 "pgrep -a kubelet"
I1123 10:26:24.751818    7590 config.go:182] Loaded profile config "calico-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

TestNetworkPlugins/group/calico/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vnphl" [79d4162a-e1a5-49a3-a423-5d850735913f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vnphl" [79d4162a-e1a5-49a3-a423-5d850735913f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005553708s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (75.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m15.16255534s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-546508 "pgrep -a kubelet"
I1123 10:27:05.760600    7590 config.go:182] Loaded profile config "custom-flannel-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wcqst" [9c47524c-4fb0-4d59-b9d3-63e72469100d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wcqst" [9c47524c-4fb0-4d59-b9d3-63e72469100d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005774735s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (87.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-546508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m27.119813077s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-546508 "pgrep -a kubelet"
I1123 10:27:38.271497    7590 config.go:182] Loaded profile config "enable-default-cni-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-546508 replace --force -f testdata/netcat-deployment.yaml
E1123 10:27:38.476547    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/functional-173031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4b6zx" [c9c21dce-5774-42db-9820-b6b0868f4c95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4b6zx" [c9c21dce-5774-42db-9820-b6b0868f4c95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007048917s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (95.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-051038 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-051038 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m35.837914044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (95.84s)
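
For readers scanning the long start invocation above, a brief gloss of the less common flags (meanings as documented by the minikube CLI; this is annotation, not part of the test output):

    # --kvm-network=default          attach the VM to libvirt's "default" network
    # --kvm-qemu-uri=qemu:///system  libvirt connection URI for the kvm2 driver
    # --disable-driver-mounts        skip hypervisor-provided host filesystem mounts
    # --keep-context=false           switch kubectl's current context to the new profile
    # --kubernetes-version=v1.28.0   pin the older control plane this group exercises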

TestStartStop/group/no-preload/serial/FirstStart (124.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-303480 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-303480 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (2m4.906760494s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (124.91s)
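
The no-preload FirstStart is the slowest in this group (124.91s against roughly 85-96s for its peers), consistent with --preload=false forcing every component image to be pulled over the network instead of being restored from minikube's preloaded image tarball. A way to observe the same effect on a workstation (a sketch; the profile names are made up for illustration and timings will vary):

    time out/minikube-linux-amd64 start -p preload-on --driver=kvm2 --container-runtime=crio
    time out/minikube-linux-amd64 start -p preload-off --preload=false --driver=kvm2 --container-runtime=crio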

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lq7hf" [f95b93af-82f4-48c8-b68a-4040a9e362f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004088562s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-546508 "pgrep -a kubelet"
I1123 10:28:15.874083    7590 config.go:182] Loaded profile config "flannel-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gp7x8" [78508122-bea6-4184-84b1-2fdebb76c5e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gp7x8" [78508122-bea6-4184-84b1-2fdebb76c5e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006685685s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestStartStop/group/embed-certs/serial/FirstStart (88.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-327008 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-327008 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.803757808s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.80s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-546508 "pgrep -a kubelet"
I1123 10:29:00.710529    7590 config.go:182] Loaded profile config "bridge-546508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-546508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mvt6z" [0c3a7141-4124-4c95-8056-09ad71144710] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mvt6z" [0c3a7141-4124-4c95-8056-09ad71144710] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005683251s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-546508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-546508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-009587 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:29:29.189726    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:29:34.311879    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-009587 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.386451598s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.39s)

TestStartStop/group/old-k8s-version/serial/DeployApp (14.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-051038 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c6bb0169-fe6d-408b-b390-3202a45bb63b] Pending
helpers_test.go:352: "busybox" [c6bb0169-fe6d-408b-b390-3202a45bb63b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1123 10:29:44.554129    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [c6bb0169-fe6d-408b-b390-3202a45bb63b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 14.005925452s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-051038 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (14.35s)
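
DeployApp does two things in sequence: it waits for the busybox pod to reach Running, then execs ulimit -n inside it, which both proves exec works end to end and surfaces the container's open-file limit. A hand-run equivalent (a sketch; names and timeout taken from the log above, with kubectl wait in place of the harness's polling):

    kubectl --context old-k8s-version-051038 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-051038 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-051038 exec busybox -- /bin/sh -c "ulimit -n"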

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-051038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-051038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096673518s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-051038 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)
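
EnableAddonWhileActive exercises the addon image-override flags: --images=MetricsServer=... swaps the metrics-server image and --registries=MetricsServer=fake.domain points it at a deliberately unresolvable registry, after which the describe step lets the harness check that the override landed in the Deployment. To eyeball the same thing directly (a sketch; this jsonpath query is illustrative, not the harness's own check):

    kubectl --context old-k8s-version-051038 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'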

TestStartStop/group/old-k8s-version/serial/Stop (86.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-051038 --alsologtostderr -v=3
E1123 10:30:05.036480    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-051038 --alsologtostderr -v=3: (1m26.840497157s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.84s)

TestStartStop/group/no-preload/serial/DeployApp (14.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-303480 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b0ec2e74-68f7-4445-b428-447b84b6abdc] Pending
helpers_test.go:352: "busybox" [b0ec2e74-68f7-4445-b428-447b84b6abdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b0ec2e74-68f7-4445-b428-447b84b6abdc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.004293559s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-303480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.30s)

TestStartStop/group/embed-certs/serial/DeployApp (15.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-327008 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f2a0c9d7-ae9e-4f0c-9aea-56887597fba8] Pending
helpers_test.go:352: "busybox" [f2a0c9d7-ae9e-4f0c-9aea-56887597fba8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f2a0c9d7-ae9e-4f0c-9aea-56887597fba8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 15.004916126s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-327008 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (15.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-303480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-303480 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/no-preload/serial/Stop (87.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-303480 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-303480 --alsologtostderr -v=3: (1m27.964770138s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.96s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-327008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-327008 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/embed-certs/serial/Stop (72.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-327008 --alsologtostderr -v=3
E1123 10:30:44.079700    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.086127    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.097482    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.118836    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.160269    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.241676    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.403247    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:44.725002    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:45.367043    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:45.998649    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:46.649039    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:49.211167    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:54.333200    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-327008 --alsologtostderr -v=3: (1m12.730027237s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (72.73s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-009587 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4eb99e3e-18a0-4a1a-b15b-f739c19b8d1d] Pending
helpers_test.go:352: "busybox" [4eb99e3e-18a0-4a1a-b15b-f739c19b8d1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4eb99e3e-18a0-4a1a-b15b-f739c19b8d1d] Running
E1123 10:31:04.574596    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 14.004435645s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-009587 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-009587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-009587 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-009587 --alsologtostderr -v=3
E1123 10:31:18.576165    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.582518    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.593844    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.615179    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.656971    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.738474    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:18.900010    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:19.221696    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:19.863761    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:21.146009    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-009587 --alsologtostderr -v=3: (1m31.026052979s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-051038 -n old-k8s-version-051038
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-051038 -n old-k8s-version-051038: exit status 7 (59.745982ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-051038 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
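
The "exit status 7 (may be ok)" note reflects minikube status encoding component state in its exit code; 7 is what a fully stopped profile returns, so the harness accepts it right after a stop. A sketch of the same guard in shell (profile name from the log above):

    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-051038 -n old-k8s-version-051038
    rc=$?
    # Exit code 7 here indicates the host is down, the expected state after 'minikube stop'.
    if [ "$rc" -eq 7 ]; then
      echo "profile stopped as expected; safe to enable addons and restart"
    fi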

TestStartStop/group/old-k8s-version/serial/SecondStart (46.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-051038 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 10:31:23.707931    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:25.056450    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:28.830064    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:31:39.072212    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-051038 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (46.375868308s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-051038 -n old-k8s-version-051038
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-327008 -n embed-certs-327008
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-327008 -n embed-certs-327008: exit status 7 (60.707595ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-327008 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (49.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-327008 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-327008 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (48.955400116s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-327008 -n embed-certs-327008
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-303480 -n no-preload-303480
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-303480 -n no-preload-303480: exit status 7 (65.116158ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-303480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (68.73s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-303480 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:31:59.553502    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:05.969465    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:05.975832    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:05.987544    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.008985    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.018111    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/kindnet-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.050602    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.131904    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.294183    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:06.616162    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:07.258515    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:07.920492    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/auto-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:08.540360    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-303480 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m8.337075733s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-303480 -n no-preload-303480
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (68.73s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xj6s8" [9fbcb8dc-2219-416c-bd6d-40c7f4d6802d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1123 10:32:11.102022    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:16.224729    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xj6s8" [9fbcb8dc-2219-416c-bd6d-40c7f4d6802d] Running
E1123 10:32:26.466782    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004865368s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)
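
UserAppExistsAfterStop confirms that the dashboard addon, enabled while the profile was stopped, actually comes up after SecondStart. The equivalent wait by hand (a sketch; namespace and selector from the log above):

    kubectl --context old-k8s-version-051038 -n kubernetes-dashboard wait \
      --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m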

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xj6s8" [9fbcb8dc-2219-416c-bd6d-40c7f4d6802d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004263898s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-051038 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-469cd" [5e2a3799-7f57-4628-9522-4d7682fb9d68] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-469cd" [5e2a3799-7f57-4628-9522-4d7682fb9d68] Running
E1123 10:32:43.665887    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:46.949066    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/custom-flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005247484s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-051038 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
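
VerifyKubernetesImages lists the images present in the node's container storage and reports anything outside the expected Kubernetes set; the busybox and kindnetd entries above are test artifacts, not failures. To inspect the same list yourself (a sketch; assumes jq is available and that the JSON entries carry a repoTags field, which the output above suggests but does not show):

    out/minikube-linux-amd64 -p old-k8s-version-051038 image list --format=json | jq -r '.[].repoTags[]'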

TestStartStop/group/old-k8s-version/serial/Pause (3.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-051038 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-051038 -n old-k8s-version-051038
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-051038 -n old-k8s-version-051038: exit status 2 (245.817502ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-051038 -n old-k8s-version-051038
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-051038 -n old-k8s-version-051038: exit status 2 (251.193304ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-051038 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-051038 -n old-k8s-version-051038
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-051038 -n old-k8s-version-051038
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.01s)
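
The Pause subtest leans on the same status exit-code convention: after pause, the APIServer query prints Paused and the Kubelet query prints Stopped, both with exit status 2, and unpause must bring both back to a clean exit. The cycle, condensed from the log above:

    out/minikube-linux-amd64 pause -p old-k8s-version-051038 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-051038 || echo "nonzero status expected while paused"
    out/minikube-linux-amd64 unpause -p old-k8s-version-051038 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-051038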

TestStartStop/group/newest-cni/serial/FirstStart (50.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-771699 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:32:38.696758    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:38.858389    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:39.180226    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:39.822424    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:32:40.515718    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-771699 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.611335677s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.61s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587: exit status 7 (74.637814ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-009587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-009587 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:32:41.104394    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-009587 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.310138635s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.54s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-469cd" [5e2a3799-7f57-4628-9522-4d7682fb9d68] Running
E1123 10:32:48.787645    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003757066s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-327008 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-327008 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-327008 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-327008 -n embed-certs-327008
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-327008 -n embed-certs-327008: exit status 2 (212.7423ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-327008 -n embed-certs-327008
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-327008 -n embed-certs-327008: exit status 2 (215.621784ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-327008 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-327008 -n embed-certs-327008
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-327008 -n embed-certs-327008
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

TestISOImage/PersistentMounts//data (0.17s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)
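
Each PersistentMounts case SSHes into the guest and runs df -t ext4 against the path, then greps for it, so the check only passes when the directory is backed by a persistent ext4 partition rather than the live ISO's in-memory filesystem (that last point is an inference about the ISO layout, not stated in the log). Hand-run form, straight from the log:

    out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /data | grep /data"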

TestISOImage/PersistentMounts//var/lib/docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
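(All seven mount checks above follow the same pattern, so they can be replayed in one loop. A minimal sketch, assuming the guest-693686 profile is still running; `df -t ext4` exits non-zero when a path is not backed by an ext4 filesystem, so any non-persistent path is flagged:

    for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
      out/minikube-linux-amd64 -p guest-693686 ssh "df -t ext4 $d | grep $d" || echo "$d: not a persistent ext4 mount"
    done
)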

TestISOImage/VersionJSON (0.17s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
--- PASS: TestISOImage/VersionJSON (0.17s)
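(The same metadata can be extracted host-side. A minimal sketch, assuming jq is installed locally and that the JSON keys match the field names the test prints (minikube_version, commit, iso_version, kicbase_version):

    out/minikube-linux-amd64 -p guest-693686 ssh "cat /version.json" \
      | jq -r '.minikube_version, .commit, .iso_version, .kicbase_version'
)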

TestISOImage/eBPFSupport (0.17s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-693686 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.12s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9h78n" [4e17ff4a-a67b-4c18-a8c0-15d5e104bb16] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9h78n" [4e17ff4a-a67b-4c18-a8c0-15d5e104bb16] Running
E1123 10:33:09.652282    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.658788    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.670244    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.691688    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.733216    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.814764    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:09.976430    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:10.298209    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:10.940554    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:12.222834    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.115740887s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.12s)
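(The helper's pod polling is roughly a label-selector wait. A minimal sketch of an equivalent manual check, assuming kubectl is pointed at the same cluster; 540s mirrors the test's 9m0s budget:

    kubectl --context no-preload-303480 -n kubernetes-dashboard wait \
      --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
)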

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9h78n" [4e17ff4a-a67b-4c18-a8c0-15d5e104bb16] Running
E1123 10:33:14.785813    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:19.511878    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:33:19.908219    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005729092s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-303480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-303480 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.24s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-303480 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-303480 --alsologtostderr -v=1: (1.145294861s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-303480 -n no-preload-303480
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-303480 -n no-preload-303480: exit status 2 (273.371377ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-303480 -n no-preload-303480
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-303480 -n no-preload-303480: exit status 2 (261.666323ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-303480 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-303480 -n no-preload-303480
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-303480 -n no-preload-303480
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-771699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1123 10:33:30.149968    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-771699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106524086s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (10.76s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-771699 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-771699 --alsologtostderr -v=3: (10.758336163s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-771699 -n newest-cni-771699
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-771699 -n newest-cni-771699: exit status 7 (59.093804ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-771699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (36.69s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-771699 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-771699 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (36.372175553s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-771699 -n newest-cni-771699
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.69s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m598b" [7b3836aa-60d4-4193-936b-cd071f22cbbb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1123 10:33:50.631818    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/flannel-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m598b" [7b3836aa-60d4-4193-936b-cd071f22cbbb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005746623s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m598b" [7b3836aa-60d4-4193-936b-cd071f22cbbb] Running
E1123 10:34:00.473306    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/enable-default-cni-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:00.982494    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:00.988879    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.000215    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.021577    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.063042    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.144445    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.305987    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:01.627243    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:02.269272    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:02.437804    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/calico-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:03.551464    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:34:06.113024    7590 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-3638/.minikube/profiles/bridge-546508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004327333s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-009587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-009587 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-009587 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-009587 --alsologtostderr -v=1: (1.279658266s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587: exit status 2 (210.754883ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587: exit status 2 (217.384701ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-009587 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-009587 -n default-k8s-diff-port-009587
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-771699 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-771699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-771699 --alsologtostderr -v=1: (1.119984003s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-771699 -n newest-cni-771699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-771699 -n newest-cni-771699: exit status 2 (284.059842ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-771699 -n newest-cni-771699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-771699 -n newest-cni-771699: exit status 2 (282.950934ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-771699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-771699 -n newest-cni-771699
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-771699 -n newest-cni-771699
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

Test skip (40/351)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.49
267 TestNetworkPlugins/group/cilium 3.58
297 TestStartStop/group/disable-driver-mounts 0.18

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-894046 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.49s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-546508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-546508

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-546508

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/hosts:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/resolv.conf:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-546508

>>> host: crictl pods:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: crictl containers:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> k8s: describe netcat deployment:
error: context "kubenet-546508" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-546508" does not exist

>>> k8s: netcat logs:
error: context "kubenet-546508" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-546508" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-546508" does not exist

>>> k8s: coredns logs:
error: context "kubenet-546508" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-546508" does not exist

>>> k8s: api server logs:
error: context "kubenet-546508" does not exist

>>> host: /etc/cni:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: ip a s:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: ip r s:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: iptables-save:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: iptables table nat:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-546508" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-546508" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-546508" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: kubelet daemon config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> k8s: kubelet logs:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-546508

>>> host: docker daemon status:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: docker daemon config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: docker system info:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: cri-docker daemon status:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: cri-docker daemon config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: cri-dockerd version:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: containerd daemon status:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: containerd daemon config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: containerd config dump:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: crio daemon status:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: crio daemon config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: /etc/crio:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

>>> host: crio config:
* Profile "kubenet-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546508"

----------------------- debugLogs end: kubenet-546508 [took: 3.318118943s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-546508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-546508
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)
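
Every probe in the debugLogs dump above fails with one of two messages because the "kubenet-546508" profile was skipped before a cluster was ever started: kubectl reports a missing kubeconfig context, and minikube reports a missing profile. A minimal Go sketch (an illustration, not minikube's actual helper code; the context name is taken from this log) that reproduces the kubectl error shape:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Asking kubectl for a context that was never created yields the same
	// "context was not found for specified context" error seen above.
	out, err := exec.Command("kubectl", "--context", "kubenet-546508",
		"get", "pods", "-A").CombinedOutput()
	fmt.Printf("%s(exit: %v)\n", out, err)
}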

x
+
TestNetworkPlugins/group/cilium (3.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-546508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-546508

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-546508

>>> host: /etc/nsswitch.conf:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/hosts:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/resolv.conf:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-546508

>>> host: crictl pods:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: crictl containers:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> k8s: describe netcat deployment:
error: context "cilium-546508" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-546508" does not exist

>>> k8s: netcat logs:
error: context "cilium-546508" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-546508" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-546508" does not exist

>>> k8s: coredns logs:
error: context "cilium-546508" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-546508" does not exist

>>> k8s: api server logs:
error: context "cilium-546508" does not exist

>>> host: /etc/cni:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: ip a s:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: ip r s:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: iptables-save:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: iptables table nat:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-546508

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-546508

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-546508" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-546508" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-546508

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-546508

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-546508" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-546508" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-546508" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-546508" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-546508" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: kubelet daemon config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> k8s: kubelet logs:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-546508

>>> host: docker daemon status:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: docker daemon config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: docker system info:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: cri-docker daemon status:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: cri-docker daemon config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: cri-dockerd version:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: containerd daemon status:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: containerd daemon config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: containerd config dump:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: crio daemon status:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: crio daemon config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: /etc/crio:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

>>> host: crio config:
* Profile "cilium-546508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546508"

----------------------- debugLogs end: cilium-546508 [took: 3.418306737s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-546508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-546508
--- SKIP: TestNetworkPlugins/group/cilium (3.58s)
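
The SKIP above comes from an explicit gate at net_test.go:102 rather than a failure; the debugLogs errors that follow it are the expected result of probing a profile that was never started. A sketch of how such a gate is typically written with testing.T.Skip (the condition helper here is hypothetical; only the skip message is taken from this log):

package net_test

import "testing"

// pluginIsOutdated is a hypothetical stand-in for the condition evaluated
// at net_test.go:102; the actual check is not shown in this report.
func pluginIsOutdated(name string) bool { return name == "cilium" }

func TestGroupCilium(t *testing.T) {
	if pluginIsOutdated("cilium") {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
}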

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-743857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-743857
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
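
This SKIP is driver-gated: start_stop_delete_test.go:101 runs the test only under the virtualbox driver, and this report's job runs KVM. A sketch of the gating pattern (the driver lookup is a hypothetical stand-in; the skip message is taken from the log):

package startstop_test

import "testing"

// activeDriver is a hypothetical stand-in for however the suite determines
// the VM driver in use; this KVM_Linux_crio job would report "kvm2".
func activeDriver() string { return "kvm2" }

func TestDisableDriverMounts(t *testing.T) {
	if activeDriver() != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
}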
